Citric acid is a weak organic acid with the chemical formula C6H8O7. It occurs naturally in citrus fruits. In biochemistry, it is an intermediate in the citric acid cycle, which occurs in the metabolism of all aerobic organisms. More than a million tons of citric acid are manufactured every year; it is used as an acidifier, as a flavoring, and as a chelating agent. A citrate is a derivative of citric acid; an example of a citrate salt is trisodium citrate. When part of a salt, the formula of the citrate ion is written as C6H5O7^3− or C3H5O(COO)3^3−. Citric acid exists in greater than trace amounts in a variety of fruits and vegetables, most notably citrus fruits. Lemons and limes have particularly high concentrations of the acid; the concentrations of citric acid in citrus fruits range from 0.005 mol/L in oranges and grapefruits to 0.30 mol/L in lemons and limes. Within species, these values vary depending on the cultivar and the circumstances in which the fruit was grown. Industrial-scale citric acid production first began in 1890, based on the Italian citrus fruit industry: the juice was treated with hydrated lime to precipitate calcium citrate, which was isolated and converted back to the acid using dilute sulfuric acid.
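The molar concentrations quoted above can be converted to mass concentrations with the molar mass of C6H8O7. A minimal sketch, assuming standard atomic weights (not stated in the text):

```python
# Convert the citric acid concentrations quoted above (mol/L) into g/L.
# Standard atomic weights (an assumption of this sketch): C 12.011,
# H 1.008, O 15.999, giving C6H8O7 a molar mass of about 192.12 g/mol.
MOLAR_MASS = 6 * 12.011 + 8 * 1.008 + 7 * 15.999  # ~192.12 g/mol

def molar_to_grams_per_litre(molarity: float) -> float:
    """Mass concentration (g/L) for a given molarity (mol/L)."""
    return molarity * MOLAR_MASS

print(round(molar_to_grams_per_litre(0.005), 2))  # oranges/grapefruit: ~0.96 g/L
print(round(molar_to_grams_per_litre(0.30), 1))   # lemons/limes: ~57.6 g/L
```

So lemon juice at 0.30 mol/L carries roughly 58 g of citric acid per litre, about sixty times the concentration in orange juice.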
In 1893, C. Wehmer discovered that Penicillium mold could produce citric acid from sugar. However, microbial production of citric acid did not become industrially important until World War I disrupted Italian citrus exports. In 1917, the American food chemist James Currie discovered that certain strains of the mold Aspergillus niger could be efficient citric acid producers, and the pharmaceutical company Pfizer began industrial-level production using this technique two years later, followed by Citrique Belge in 1929. In this production technique, still the major industrial route to citric acid used today, cultures of A. niger are fed on a sucrose- or glucose-containing medium to produce citric acid. The source of sugar is corn steep liquor, hydrolyzed corn starch, or other inexpensive sugary solutions. After the mold is filtered out of the resulting solution, citric acid is isolated by precipitating it with calcium hydroxide to yield calcium citrate salt, from which citric acid is regenerated by treatment with sulfuric acid, as in the direct extraction from citrus fruit juice.
In 1977, a patent was granted to Lever Brothers for the chemical synthesis of citric acid, starting either from aconitic acid or from isocitrate/alloisocitrate calcium salts under high-pressure conditions. This produced citric acid in near-quantitative conversion, under what appeared to be a reverse, non-enzymatic Krebs cycle reaction. In 2007, worldwide annual production stood at 1,600,000 tons, of which more than 50% was produced in China. More than 50% of this volume was used as an acidity regulator in beverages, some 20% in other food applications, 20% for detergent applications, and 10% for applications other than food, such as cosmetics, pharmaceuticals, and the chemical industry. Citric acid was first isolated in 1784 by the chemist Carl Wilhelm Scheele, who crystallized it from lemon juice. It can exist either in an anhydrous form or as a monohydrate. The anhydrous form crystallizes from hot water, while the monohydrate forms when citric acid is crystallized from cold water; the monohydrate can be converted to the anhydrous form at about 78 °C.
Citric acid dissolves in absolute ethanol at 15 °C. It decomposes with loss of carbon dioxide above about 175 °C. Citric acid is a tribasic acid, with pKa values, extrapolated to zero ionic strength, of 2.92, 4.28, and 5.21 at 25 °C. The pKa of the hydroxyl group has been found, by means of 13C NMR spectroscopy, to be 14.4. The speciation diagram shows that solutions of citric acid are buffer solutions between about pH 2 and pH 8. In biological systems around pH 7, the two species present are the citrate ion and the mono-hydrogen citrate ion; the SSC 20X hybridization buffer is an example in common use. Tables compiled for biochemical studies are available. The pH of a 1 mM solution of citric acid is about 3.2. The pH of fruit juices from citrus fruits like oranges and lemons depends on the citric acid concentration, being lower for higher acid concentrations and conversely. Acid salts of citric acid can be prepared by careful adjustment of the pH before crystallizing the compound.
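The claim that a 1 mM solution has pH about 3.2 can be checked numerically from the zero-ionic-strength pKa values quoted above (2.92, 4.28, 5.21). A sketch that solves the triprotic charge balance by bisection, ignoring activity effects:

```python
# Estimate the pH of a citric acid solution from its three pKa values.
# Activity corrections are ignored; this is an illustrative sketch only.
from math import log10

KA = [10**-2.92, 10**-4.28, 10**-5.21]  # stepwise acid dissociation constants
KW = 1e-14                              # water autoionization constant

def ph_of_citric_acid(c_total: float) -> float:
    """Solve the charge balance for a triprotic acid by bisection."""
    def imbalance(h):
        # Relative amounts of H2A-, HA2-, A3- compared with neutral H3A
        f1 = KA[0] / h
        f2 = f1 * KA[1] / h
        f3 = f2 * KA[2] / h
        denom = 1 + f1 + f2 + f3
        anion_charge = c_total * (f1 + 2 * f2 + 3 * f3) / denom
        return h - KW / h - anion_charge  # positive when h is too large

    lo, hi = 1e-12, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if imbalance(mid) > 0:
            hi = mid
        else:
            lo = mid
    return -log10((lo + hi) / 2)

print(round(ph_of_citric_acid(1e-3), 2))  # close to the quoted value of ~3.2
```

The bisection works because the imbalance function is monotonic in [H+]: raising the proton concentration increases the left side and suppresses deprotonation on the right.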
See, for example, sodium citrate. The citrate ion forms complexes with metallic cations; the stability constants for the formation of these complexes are quite large because of the chelate effect. It also forms complexes with alkali metal cations. However, when a chelate complex is formed using all three carboxylate groups, the chelate rings have 7 and 8 members, which are generally less stable thermodynamically than smaller chelate rings. In consequence, the hydroxyl group can be deprotonated to form part of a more stable 5-membered ring, as in ammonium ferric citrate, (NH4)5Fe(C6H4O7)2·2H2O. Citric acid can be esterified at one or more of its carboxylic acid groups to form any of a variety of mono-, di-, tri-, and mixed esters. Citrate is an intermediate in the TCA cycle, a central metabolic pathway for animals and bacteria. Citrate synthase catalyzes the condensation of oxaloacetate with acetyl-CoA to form citrate. Citrate then acts as the substrate for aconitase and is converted into aconitic acid.
The cycle ends with regeneration of oxaloacetate. This series of reactions is the source of much of the food-derived energy in higher organisms.
A cuvette is a small tube-like container with straight sides and a circular or square cross-section. It is sealed at one end and made of a clear, transparent material such as plastic, glass, or fused quartz. Cuvettes are designed to hold samples for spectroscopic measurement, where a beam of light is passed through the sample within the cuvette to measure the absorbance, fluorescence intensity, fluorescence polarization, or fluorescence lifetime of the sample; this measurement is done with a spectrophotometer. Traditional ultraviolet–visible spectroscopy or fluorescence spectroscopy uses liquid samples; the sample is a solution, with the substance of interest dissolved within it. The sample is placed in a cuvette and the cuvette is placed in a spectrophotometer for testing; the cuvette can be made of any material that is transparent in the range of wavelengths used in the test. The smallest cuvettes can hold as little as 70 microliters. The width of the cuvette determines the length of the light path through the sample, which affects the calculation of the absorbance value.
Many cuvettes have a light path of 10 mm, which simplifies calculation of the coefficient of absorption. Most cuvettes have two transparent sides opposite one another so the spectrophotometer light can pass through, although some tests use reflection and so need only a single transparent side. For fluorescence measurements, two more transparent sides, at right angles to those used for the spectrophotometer light, are needed for the excitation light. Some cuvettes have a glass or plastic cap for use with hazardous solutions, or to protect samples from air. Scratches on the sides of the cuvette through which the light passes scatter light and cause errors; a rubber or plastic rack protects the cuvette from accidentally hitting and being scratched by the machine casing. The solvent and temperature can also affect measurements. Cuvettes to be used in circular dichroism experiments should never be mechanically stressed, as the stress will induce birefringence in the quartz and affect measurements. Fingerprints and droplets of water disrupt light rays during measurement, so low-lint gauze or cloth may be used to wipe the outer surface of a cuvette clean before use.
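The reason a 10 mm path simplifies the arithmetic is the Beer–Lambert law, A = ε·l·c: with l = 1 cm, concentration follows directly from the absorbance and the molar absorption coefficient. A short sketch, with illustrative (made-up) numbers:

```python
# Beer-Lambert law: absorbance A = epsilon * path * concentration,
# so concentration can be recovered from a spectrophotometer reading.
def concentration_from_absorbance(absorbance: float,
                                  epsilon: float,
                                  path_cm: float = 1.0) -> float:
    """c (mol/L) from A = e*l*c; epsilon in L/(mol*cm), path in cm."""
    return absorbance / (epsilon * path_cm)

# Hypothetical example values: A = 0.50 in a standard 10 mm (1 cm) cuvette
# for a dye with a molar absorption coefficient of 25,000 L/(mol*cm).
print(concentration_from_absorbance(0.50, 25_000))  # 2e-05 mol/L
```

A cuvette with a shorter path (say 1 mm, `path_cm=0.1`) would give one-tenth the absorbance for the same solution, which is why the path length must enter the calculation.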
A paper towel or similar may scratch the cuvette. Mild detergent or ethanol may be applied, followed by rinsing with tap water. Acids and alkalis are avoided because of their corrosive effects on glass, and acetone is unsuitable when working with plastic cuvettes. If solution is transferred into a cuvette using a Pasteur pipette containing air, bubbles may form inside the cuvette, reducing the purity of the solution and scattering the light beam; tapping the cuvette gently with a glove-clad finger is used to remove such bubbles. The solution in the cuvette should be high enough to be in the path of the light source. If the sample needs incubation at a high temperature, care must be taken to avoid temperatures too hot for the cuvette. Reusable quartz cuvettes were traditionally required for measurements in the ultraviolet range, because glass and most plastics absorb ultraviolet light, creating interference. Today there are disposable cuvettes made of specialized plastics that are transparent to ultraviolet light. Glass, plastic, and quartz cuvettes are all suitable for measurements made at longer wavelengths, such as in the visible light range.
"Tandem cuvettes" have a glass barrier that extends two-thirds of the way up the middle, so that measurements can be taken with the two solutions separated and taken again after they are mixed. Plastic cuvettes are used in fast spectroscopic assays, where high speed is more important than high accuracy. Plastic cuvettes, with a usable wavelength range of 380–780 nm, may be disposed of after use, preventing contamination from re-use, and they are cheap to purchase. Disposable cuvettes can be used in some laboratories where the light beam is not intense enough to affect the absorption tolerance and consistency of the value. Crown glass has an optimal wavelength range of 340–2500 nm. Glass cuvettes are typically for use in the wavelength range of visible light, whereas fused quartz tends to be used for ultraviolet applications. Quartz cells provide more durability than glass. Quartz excels at transmitting UV light and can be used for wavelengths ranging from 190 to 2500 nm. Fused quartz cells are therefore used for wavelengths below the visible range, i.e. ultraviolet light.
IR quartz has a usable wavelength range of 220 to 3,500 nm, and is more resistant to chemical attack from the sample solution than other types designed for fluorescence measurements. Sapphire cuvettes are the most expensive, but provide the most durable, scratch-resistant, and transmissible material; their transmission extends from ultraviolet to mid-infrared light, ranging from 250 to 5,000 nm. Sapphire can withstand the extreme nature of some sample solutions and variances in temperature. In 1934, James Franklin Hyde created a fused silica cell free from extraneous elements, using a technique originally developed for liquefying other glass products. In the 1950s, Starna Ltd. improved the method by melting a segment of glass using heat without deforming its shape, an innovation that allowed the production of inert cuvettes without any thermosetting resin. Before the rectangular cuvette was created, ordinary test tubes were used; as innovation motivated changes in technique, cuvettes were constructed to have advantages over ordinary test tubes.
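The wavelength ranges quoted above can be collected into a small lookup that suggests which cuvette materials cover a given measurement wavelength. Treating the quoted ranges as hard cutoffs is a simplification for illustration:

```python
# Usable wavelength ranges (nm) for cuvette materials, as quoted in the text.
RANGES = {
    "UV plastic":   (380, 780),
    "crown glass":  (340, 2500),
    "fused quartz": (190, 2500),
    "IR quartz":    (220, 3500),
    "sapphire":     (250, 5000),
}

def suitable_materials(wavelength_nm: float) -> list[str]:
    """Cuvette materials whose quoted range covers the given wavelength."""
    return [name for name, (lo, hi) in RANGES.items()
            if lo <= wavelength_nm <= hi]

print(suitable_materials(260))   # deep UV: only the quartz types and sapphire
print(suitable_materials(3000))  # mid-IR: only IR quartz and sapphire
```

At 260 nm, a common wavelength for nucleic acid measurements, plastic and crown glass drop out, which matches the text's point that quartz is needed for ultraviolet work.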
See also: Calibration curve; Spectrophotometry. Further reading: Standard Practice for Describing and Measuring Performance of Ultraviolet and Near-Infrared Spectrophotometers.
A microscope is an instrument used to see objects that are too small to be seen by the naked eye. Microscopy is the science of investigating small structures using such an instrument. Microscopic means being invisible to the eye unless aided by a microscope. There are many types of microscopes, and they may be grouped in different ways. One way is to describe the method an instrument uses to interact with a sample and create images: either by sending a beam of light or electrons through a sample in its optical path, or by scanning across, and a short distance from, the surface of a sample using a probe. The most common microscope is the optical microscope, which passes light through a sample to produce an image. Other major types of microscopes are the fluorescence microscope, the electron microscope, and the various types of scanning probe microscopes. Although objects resembling lenses date back 4,000 years, and there are Greek accounts of the optical properties of water-filled spheres followed by many centuries of writings on optics, the earliest known use of simple microscopes dates back to the widespread use of lenses in eyeglasses in the 13th century.
The earliest known examples of compound microscopes, which combine an objective lens near the specimen with an eyepiece to view a real image, appeared in Europe around 1620. The inventor is unknown. Several claims revolve around the spectacle-making centers in the Netherlands: claims that it was invented in 1590 by Zacharias Janssen and/or Zacharias' father, Hans Martens; claims that it was invented by their neighbor and rival spectacle maker, Hans Lippershey; and claims that it was invented by the expatriate Cornelis Drebbel, who was noted to have a version in London in 1619. Galileo Galilei seems to have found after 1610 that he could close-focus his telescope to view small objects and, after seeing a compound microscope built by Drebbel exhibited in Rome in 1624, built his own improved version. Giovanni Faber coined the name microscope for the compound microscope Galileo submitted to the Accademia dei Lincei in 1625; the first detailed account of the microscopic anatomy of organic tissue based on the use of a microscope did not appear until 1644, in Giambattista Odierna's L'occhio della mosca, or The Fly's Eye.
The microscope was still largely a novelty until the 1660s and 1670s, when naturalists in Italy, the Netherlands, and England began using microscopes to study biology. The Italian scientist Marcello Malpighi, called the father of histology by some historians of biology, began his analysis of biological structures with the lungs. Robert Hooke's Micrographia had a huge impact, largely because of its impressive illustrations. A significant contribution came from Antonie van Leeuwenhoek, who achieved up to 300 times magnification using a simple single-lens microscope: he sandwiched a small glass ball lens between the holes in two metal plates riveted together, with an adjustable-by-screws needle attached to mount the specimen. Van Leeuwenhoek re-discovered red blood cells and spermatozoa, and helped popularise the use of microscopes to view biological ultrastructure. On 9 October 1676, van Leeuwenhoek reported the discovery of micro-organisms. The performance of a light microscope depends on the quality and correct use of the condenser lens system, which focuses light on the specimen, and the objective lens, which captures the light from the specimen and forms an image.
Early instruments were limited until this principle was fully appreciated and developed, from the late 19th to the early 20th century, and until electric lamps were available as light sources. In 1893 August Köhler developed a key principle of sample illumination, Köhler illumination, which is central to achieving the theoretical limits of resolution for the light microscope. This method of sample illumination produces even lighting and overcomes the limited contrast and resolution imposed by early techniques of sample illumination. Further developments in sample illumination came from the invention of phase contrast by Frits Zernike, recognized with the Nobel Prize in 1953, and of differential interference contrast illumination by Georges Nomarski in 1955. In the early 20th century a significant alternative to the light microscope was developed: an instrument that uses a beam of electrons rather than light to generate an image. The German physicist Ernst Ruska, working with the electrical engineer Max Knoll, developed the first prototype electron microscope in 1931, a transmission electron microscope.
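The "theoretical limits of resolution" mentioned above are conventionally expressed by the Abbe diffraction limit, d = λ / (2·NA), a standard result that is not spelled out in the text. A short sketch with illustrative values:

```python
# Abbe diffraction limit: the smallest resolvable feature size is roughly
# the wavelength divided by twice the numerical aperture of the objective.
def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable feature size in nm, d = lambda / (2 * NA)."""
    return wavelength_nm / (2 * numerical_aperture)

# Illustrative values: green light (550 nm) with a high-NA (1.4)
# oil-immersion objective.
print(round(abbe_limit_nm(550, 1.4)))  # about 196 nm
```

With visible light the limit sits near 200 nm regardless of lens quality, which is why electron microscopes, whose electrons have far shorter wavelengths, achieve much higher resolution.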
The transmission electron microscope works on similar principles to an optical microscope, but uses electrons in place of light and electromagnets in place of glass lenses. The use of electrons, instead of light, allows for much higher resolution. Development of the transmission electron microscope was followed in 1935 by the development of the scanning electron microscope by Max Knoll. Although TEMs were being used for research before WWII and became popular afterwards, the SEM was not commercially available until 1965. Ernst Ruska, working at Siemens, developed the first commercial transmission electron microscope and, in the 1950s, major scientific conferences on electron microscopy started being held. In 1965, the first commercial scanning electron microscope was developed by Professor Sir Charles Oatley's group at Cambridge.
A medical guideline is a document with the aim of guiding decisions and criteria regarding diagnosis and treatment in specific areas of healthcare. Such documents have been in use for thousands of years, during the entire history of medicine. However, in contrast to previous approaches, which were often based on tradition or authority, modern medical guidelines are based on an examination of current evidence within the paradigm of evidence-based medicine, and they include summarized consensus statements on best practice in healthcare. A healthcare provider is obliged to know the medical guidelines of his or her profession, and has to decide whether to follow the recommendations of a guideline for an individual treatment. Modern clinical guidelines identify and evaluate the highest-quality evidence and most current data about prevention, prognosis, and therapy, including dosage of medications, risk/benefit, and cost-effectiveness. They define the most important questions related to clinical practice and identify all possible decision options and their outcomes.
Some guidelines contain computation algorithms to be followed. Thus, they integrate the identified decision points and respective courses of action with the clinical judgement and experience of practitioners. Many guidelines place the treatment alternatives into classes to help providers decide which treatment to use. Additional objectives of clinical guidelines are to standardize medical care, to raise the quality of care, to reduce several kinds of risk, and to achieve the best balance between cost and medical parameters such as effectiveness and sensitivity. It has been demonstrated that the use of guidelines by healthcare providers such as hospitals is an effective way of achieving the objectives listed above, although it is not the only one. Guidelines are produced at national or international levels by medical associations or governmental bodies, such as the United States Agency for Healthcare Research and Quality. Local healthcare providers may produce their own set of guidelines or adapt them from existing top-level guidelines.
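A "computation algorithm" of the kind some guidelines contain can be sketched as a simple mapping from a clinical finding to a recommendation class. The classes and thresholds below are entirely hypothetical, for illustration only, and do not come from any real guideline:

```python
# Toy sketch of a guideline-style decision algorithm. The condition,
# thresholds, and class labels are all made up for illustration.
def recommendation_class(systolic_bp: int) -> str:
    """Map a hypothetical blood-pressure reading to a treatment class."""
    if systolic_bp >= 180:
        return "Class I: treat immediately"
    if systolic_bp >= 140:
        return "Class IIa: treatment reasonable"
    return "Class III: no treatment indicated"

print(recommendation_class(165))  # Class IIa: treatment reasonable
```

Real guideline execution engines encode far richer logic, but the same idea holds: explicit decision points with defined courses of action, leaving room for clinical judgement at each step.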
Special computer software packages known as guideline execution engines have been developed to facilitate the use of medical guidelines in concert with an electronic medical record system. The Guideline Interchange Format is a computer representation format for clinical guidelines that can be used with such engines. The USA and other countries maintain medical guideline clearinghouses. In the USA, the National Guideline Clearinghouse maintains a catalog of high-quality guidelines published by various health and medical associations. In the United Kingdom, clinical practice guidelines are published by the National Institute for Health and Care Excellence. In the Netherlands, two bodies, the Institute for Healthcare Improvement and the College of General Practitioners, have published specialist and primary care guidelines, respectively. In Germany, the German Agency for Quality in Medicine coordinates a national program for disease management guidelines. All these organisations are now members of the Guidelines International Network, an international network of organisations and individuals involved in clinical practice guidelines.
Checklists have been used in medical practice to attempt to ensure that clinical practice guidelines are followed. An example is the Surgical Safety Checklist developed for the World Health Organization by Dr. Atul Gawande. According to a meta-analysis, after introduction of the checklist mortality dropped by 23% and all complications by 40%, but further high-quality studies are required to make the meta-analysis more robust. In the UK, a study on the implementation of a checklist for the provision of medical care to elderly patients admitted to hospital found that the checklist highlighted limitations with frailty assessment in acute care and motivated teams to review routine practices, but that work is needed to understand whether and how checklists can be embedded in complex multidisciplinary care. Guidelines may lose their clinical relevance as they age and newer research emerges; around 20% of strong recommendations in practice guidelines, particularly those based on opinion rather than trials, may later be retracted.
The New York Times reported in 2004 that some simple clinical practice guidelines are not followed to the extent they might be. It has been found that providing a nurse or other medical assistant with a checklist of recommended procedures can result in the attending physician being reminded in a timely manner about procedures that might have been overlooked. Guideline authors may have conflicts of interest; as such, the quality of guidelines may vary, especially for guidelines published online that have not had to follow the methodological reporting standards required by reputable clearinghouses. Guidelines may also make recommendations that are stronger than the supporting evidence warrants. In response to many of these problems with traditional guidelines, the BMJ created a new series of trustworthy guidelines focused on the most pressing medical issues, called BMJ Rapid Recommendations. Examples of guidelines include the American Heart Association Guidelines for the Prevention of Infective Endocarditis and the BMJ Rapid Recommendation guideline on transcatheter aortic valve implantation versus surgical aortic valve replacement for aortic stenosis.
See also: Clinical formulation; Clinical prediction rule; Clinical trial protocol; Medical algorithm; Treatment Guidelines from The Medical Letter; British Columbia Medical Guidelines (British Columbia's guidelines and protocols, in Canada).
Hemoglobin or haemoglobin, abbreviated Hb or Hgb, is the iron-containing oxygen-transport metalloprotein in the red blood cells of all vertebrates, as well as in the tissues of some invertebrates. Haemoglobin in the blood carries oxygen from the lungs or gills to the rest of the body. There it releases the oxygen to permit aerobic respiration, which provides energy to power the functions of the organism in the process called metabolism. A healthy individual has 12 to 16 grams of haemoglobin in every 100 mL of blood. In mammals, the protein makes up about 96% of the red blood cells' dry content, and around 35% of the total content including water. Haemoglobin has an oxygen-binding capacity of 1.34 mL of O2 per gram, which increases the total blood oxygen capacity seventy-fold compared to dissolved oxygen in blood. The mammalian hemoglobin molecule can bind up to four oxygen molecules. Hemoglobin is also involved in the transport of other gases: it carries some of the body's respiratory carbon dioxide as carbaminohemoglobin, in which CO2 is bound to the globin protein.
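The figures quoted above (1.34 mL O2 per gram of haemoglobin, and 12 to 16 g of haemoglobin per 100 mL of blood) let us estimate how much oxygen fully saturated blood can carry. A rough sketch that neglects dissolved oxygen:

```python
# Oxygen-carrying capacity of blood from the haemoglobin concentration.
# Dissolved oxygen is neglected in this rough sketch.
O2_PER_GRAM_HB = 1.34  # mL of O2 bound per gram of haemoglobin

def oxygen_capacity_ml_per_dl(hb_g_per_dl: float) -> float:
    """mL of O2 carried per 100 mL (1 dL) of fully saturated blood."""
    return hb_g_per_dl * O2_PER_GRAM_HB

print(round(oxygen_capacity_ml_per_dl(12.0), 2))  # ~16.08 mL O2 per dL
print(round(oxygen_capacity_ml_per_dl(16.0), 2))  # ~21.44 mL O2 per dL
```

So a healthy individual carries roughly 16 to 21 mL of oxygen per 100 mL of blood, which is the seventy-fold increase over dissolved oxygen that the text describes.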
The molecule also carries the important regulatory molecule nitric oxide bound to a globin protein thiol group, releasing it at the same time as oxygen. Haemoglobin is also found outside red blood cells and their progenitor lines. Other cells that contain haemoglobin include the A9 dopaminergic neurons in the substantia nigra, alveolar cells, the retinal pigment epithelium, mesangial cells in the kidney, endometrial cells, cervical cells, and vaginal epithelial cells. In these tissues, haemoglobin has a non-oxygen-carrying function as an antioxidant and a regulator of iron metabolism. Haemoglobin and haemoglobin-like molecules are also found in many invertebrates and plants. In these organisms, haemoglobins may carry oxygen, or they may act to transport and regulate other small molecules and ions such as carbon dioxide, nitric oxide, hydrogen sulfide, and sulfide. A variant of the molecule, called leghaemoglobin, is used to scavenge oxygen away from anaerobic systems, such as the nitrogen-fixing nodules of leguminous plants, before the oxygen can poison the system.
In 1825, J. F. Engelhard discovered that the ratio of iron to protein is identical in the hemoglobins of several species. From the known atomic mass of iron, he calculated the molecular mass of hemoglobin as n × 16000, the first determination of a protein's molecular mass. This "hasty conclusion" drew a lot of ridicule at the time from scientists who could not believe that any molecule could be that big. Gilbert Smithson Adair confirmed Engelhard's results in 1925 by measuring the osmotic pressure of hemoglobin solutions. The oxygen-carrying property of hemoglobin was discovered by Hünefeld in 1840. In 1851, the German physiologist Otto Funke published a series of articles in which he described growing hemoglobin crystals by successively diluting red blood cells with a solvent such as pure water, alcohol, or ether, followed by slow evaporation of the solvent from the resulting protein solution. Hemoglobin's reversible oxygenation was described a few years later by Felix Hoppe-Seyler. In 1959, Max Perutz determined the molecular structure of hemoglobin by X-ray crystallography.
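Engelhard's reasoning can be reconstructed: if iron makes up a small, fixed mass fraction of the protein, then the molecule must weigh at least (atomic mass of iron) / (mass fraction) per iron atom. The ~0.35% iron content used below is the commonly quoted figure, not stated in the text:

```python
# Reconstruct Engelhard's minimal-molar-mass argument for haemoglobin.
FE_ATOMIC_MASS = 55.85        # g/mol, atomic mass of iron
IRON_MASS_FRACTION = 0.0035   # ~0.35% iron by mass (assumed, not from the text)

# Minimal molar mass per iron atom: the whole molecule is n times this,
# where n is the (then unknown) number of iron atoms per molecule.
min_molar_mass = FE_ATOMIC_MASS / IRON_MASS_FRACTION
print(round(min_molar_mass, -3))  # ~16000 g/mol per iron atom, i.e. n * 16000
```

With four iron atoms per molecule (n = 4), this gives roughly 64,000 g/mol, close to the accepted molecular mass of haemoglobin.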
This work resulted in his sharing the 1962 Nobel Prize in Chemistry with John Kendrew for their studies of the structures of globular proteins. The role of hemoglobin in the blood was elucidated by the French physiologist Claude Bernard. The name hemoglobin is derived from the words heme and globin, reflecting the fact that each subunit of hemoglobin is a globular protein with an embedded heme group. Each heme group contains one iron atom that can bind one oxygen molecule through ion-induced dipole forces; the most common type of hemoglobin in mammals contains four such subunits. Hemoglobin consists of protein subunits; these proteins, in turn, are folded chains of a large number of different amino acids, called polypeptides. The amino acid sequence of any polypeptide created by a cell is in turn determined by stretches of DNA called genes. In all proteins, it is the amino acid sequence that determines the protein's chemical properties and function. There is more than one hemoglobin gene: in humans, hemoglobin A is coded for by the genes HBA1, HBA2, and HBB.
The amino acid sequences of the globin proteins in hemoglobins differ between species, and these differences grow with evolutionary distance between species. For example, the most common hemoglobin sequences in humans and chimpanzees are nearly identical, differing by only one amino acid in both the alpha and the beta globin protein chains; these differences grow larger between less closely related species. Even within a species, different variants of hemoglobin exist, although one sequence is usually the most common one in each species. Mutations in the genes for the hemoglobin protein in a species result in hemoglobin variants. Many of these mutant forms of hemoglobin cause no disease; some, however, cause a group of hereditary diseases termed the hemoglobinopathies. The best-known hemoglobinopathy is sickle-cell disease, the first human disease whose mechanism was understood at the molecular level. A separate set of diseases, called thalassemias, involves underproduction of normal and sometimes abnormal hemoglobins, through problems and mutations in globin gene regulation.
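Counting amino acid differences between two aligned globin sequences, as in the human-chimpanzee comparison above, is a simple Hamming distance. The sequences below are short made-up strings, not real globin data, purely to illustrate the comparison:

```python
# Hamming distance: count positions where two aligned sequences differ.
def hamming(seq_a: str, seq_b: str) -> int:
    """Number of differing positions between two equal-length sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    return sum(a != b for a, b in zip(seq_a, seq_b))

# Toy example sequences (hypothetical, not actual globin fragments):
print(hamming("VHLTPEEK", "VHLTPKEK"))  # 1 difference
```

Real cross-species comparisons first require a sequence alignment, since insertions and deletions shift positions; the Hamming count applies only once the sequences are aligned.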
All these diseases produce anemia. Variations in hemoglobin amino acid sequences, as with other proteins, may be adaptive. For example, hemoglobin has been found to adapt in different ways to high altitudes.
Phlebotomy is the process of making an incision in a vein with a needle; the procedure itself is known as a venipuncture. A person who performs phlebotomy is called a "phlebotomist", although doctors, medical laboratory scientists, and others do portions of phlebotomy procedures in many countries. Phlebotomists are people trained to draw blood from a patient for clinical or medical testing, donations, or research. Phlebotomists collect blood primarily by performing venipunctures; blood may also be collected from infants by means of a heel stick. The duties of a phlebotomist may include properly identifying the patient, interpreting the tests requested on the requisition, drawing blood into the correct tubes with the proper additives, explaining the procedure to the patient, preparing the patient accordingly, practising the required forms of asepsis, practising standard and universal precautions, performing the skin/vein puncture, withdrawing blood into containers or tubes, restoring hemostasis of the puncture site, instructing patients on post-puncture care, ordering tests per the doctor's requisition, affixing tubes with electronically printed labels, and delivering specimens to a laboratory.
Some countries, states, or districts require that phlebotomy personnel be registered. In Australia, there are a number of courses in phlebotomy offered by educational institutions, but training is typically provided on the job; the minimum primary qualification for phlebotomists in Australia is a Certificate III in Pathology Collection from an approved educational institution. In the UK there is no requirement to hold a formal qualification or certification before becoming a phlebotomist, as training is provided on the job; the NHS offers training with formal certification upon completion. Special state certification in the United States is required in only four states: California, Washington, Nevada, and Louisiana. A phlebotomist can become nationally certified through many different organizations. However, California only accepts national certificates from certain agencies, including the American Certification Agency, American Medical Technologists, the American Society for Clinical Pathology, the National Center for Competency Testing/Multi-skilled Medical Certification Institute, the National Credentialing Agency, the National Healthcareer Association, and the National Phlebotomy Certification Examination.
These and other agencies also certify phlebotomists outside the state of California. To qualify to sit for an examination, candidates must complete a full phlebotomy course and provide documentation of clinical or laboratory experience. Early "phlebotomists" used techniques such as leeches and incision to extract blood from the body. Bloodletting was used as a therapeutic as well as a prophylactic process, thought to remove toxins from the body and to balance the humours. While physicians did perform bloodletting, it was a specialty of barber surgeons, the primary providers of health care to most people in the medieval and early modern eras. See also: Cytotechnologist; Injection; Medical technologist; Venipuncture; List of surgeries by type.