A fossil fuel is a fuel formed by natural processes, such as the anaerobic decomposition of buried dead organisms, containing energy that originated in ancient photosynthesis. The organisms and their resulting fossil fuels are typically millions of years old, sometimes exceeding 650 million years. Fossil fuels contain high percentages of carbon and include petroleum, coal, and natural gas. Commonly used derivatives include kerosene and propane. Fossil fuels range from volatile materials with low carbon-to-hydrogen ratios, like methane, to liquids like petroleum, to nonvolatile materials composed of almost pure carbon, like anthracite coal. Methane can be found in hydrocarbon fields alone, associated with oil, or in the form of methane clathrates. The theory that fossil fuels formed from the fossilized remains of dead plants through exposure to heat and pressure in the Earth's crust over millions of years was first introduced by Andreas Libavius in his 1597 Alchemia and later by Mikhail Lomonosov, as early as 1757 and certainly by 1763.
The term "fossil fuel" was first used by the German chemist Caspar Neumann, in English translation in 1759. In 2017 the world's primary energy sources consisted chiefly of petroleum, coal, and natural gas, which together accounted for about 85% of the world's primary energy consumption. Non-fossil sources included nuclear power, hydroelectricity, and others such as geothermal, solar, tidal, and wind, the last group amounting to about 0.9% in 2006. World energy consumption was growing at about 2.3% per year, and in 2015 about 18% of worldwide consumption was from renewable sources. Although fossil fuels are continually being formed by natural processes, they are considered non-renewable resources because they take millions of years to form and the known viable reserves are being depleted much faster than new ones are being made. The use of fossil fuels raises serious environmental concerns: burning them produces around 21.3 billion tonnes of carbon dioxide per year, and it is estimated that natural processes can absorb only about half of that amount, leaving a net increase of roughly 10.65 billion tonnes of atmospheric carbon dioxide per year.
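The net-increase figure follows directly from the two numbers quoted above; a minimal back-of-the-envelope check in Python:

```python
# Net annual increase in atmospheric CO2 from fossil-fuel burning,
# using only the figures quoted in the text.
emitted_gt = 21.3            # billion tonnes of CO2 emitted per year
absorbed_fraction = 0.5      # natural processes absorb roughly half

net_increase_gt = emitted_gt * (1 - absorbed_fraction)
print(f"Net atmospheric increase: {net_increase_gt:.2f} billion tonnes/year")
# 21.3 * 0.5 = 10.65, matching the figure in the text
```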
Carbon dioxide is a greenhouse gas that increases radiative forcing and contributes to global warming. A global movement towards low-carbon renewable energy generation is underway to help reduce greenhouse gas emissions. Aquatic phytoplankton and zooplankton that died and sedimented in large quantities under anoxic conditions millions of years ago began forming petroleum and natural gas through anaerobic decomposition. Over geological time this organic matter, mixed with mud, became buried under further heavy layers of inorganic sediment. The resulting high levels of heat and pressure caused the organic matter to chemically alter, first into a waxy material known as kerogen, which is found in oil shales, and then with more heat into liquid and gaseous hydrocarbons in a process known as catagenesis. Despite these heat-driven transformations, the embedded energy is still photosynthetic in origin. Terrestrial plants, on the other hand, tended to form coal and methane. Many of the world's coal fields date to the Carboniferous period of Earth's history.
Terrestrial plants also form type III kerogen, a source of natural gas. There is a wide range of organic, or hydrocarbon, compounds in any given fuel mixture, and the specific mixture of hydrocarbons gives a fuel its characteristic properties, such as boiling point, melting point, and viscosity. Some fuels, such as natural gas, contain only low-boiling, gaseous components, while others, such as gasoline or diesel, contain much higher-boiling components. Fossil fuels are of great importance because they can be burned, producing significant amounts of energy per unit mass. The use of coal as a fuel predates recorded history; coal was used to run furnaces for the smelting of metal ore. Semi-solid hydrocarbons from seeps were also burned in ancient times, but these materials were mostly used for waterproofing and embalming. Commercial exploitation of petroleum began in the 19th century as a replacement for oils from animal sources for use in oil lamps. Natural gas, once flared off as an unneeded byproduct of petroleum production, is now considered a valuable resource.
Natural gas deposits are also the main source of the element helium. Heavy crude oil, which is much more viscous than conventional crude oil, and oil sands, where bitumen is found mixed with sand and clay, began to become more important as sources of fossil fuel in the early 2000s. Oil shale and similar materials are sedimentary rocks containing kerogen, a complex mixture of high-molecular-weight organic compounds, which yield synthetic crude oil when heated; these materials have yet to be exploited commercially. With additional processing, they can be employed in lieu of other established fossil fuel deposits, but more recently there has been disinvestment from the exploitation of such resources due to their high carbon cost relative to more easily processed reserves. Prior to the latter half of the 18th century, windmills and watermills provided the energy needed for industry, such as milling flour, sawing wood, or pumping water, while burning wood or peat provided domestic heat. The wide-scale use of fossil fuels, coal at first and later petroleum, to fire steam engines enabled the Industrial Revolution.
At the same time, gas lights using natural gas or coal gas were coming into wide use. The invention of the internal combustion engine and its use in automobiles and trucks greatly increased the demand for gasoline and diesel oil, both made from fossil fuels.
Nuclear power is the use of nuclear reactions that release nuclear energy to generate heat, which most frequently is then used in steam turbines to produce electricity in a nuclear power plant. As a nuclear technology, nuclear power can be obtained from nuclear fission, nuclear decay, and nuclear fusion reactions. Presently, the vast majority of electricity from nuclear power is produced by nuclear fission of uranium and plutonium. Nuclear decay processes are used in niche applications such as radioisotope thermoelectric generators, while generating electricity from fusion power remains the focus of international research. This article deals with nuclear fission power for electricity generation. Civilian nuclear power supplied 2,488 terawatt hours (TWh) of electricity in 2017, equivalent to about 10% of global electricity generation. As of April 2018, there were 449 civilian fission reactors in the world, with a combined electrical capacity of 394 gigawatts (GW). As of 2018, there were 58 power reactors under construction and 154 reactors planned, with combined capacities of 63 GW and 157 GW, respectively.
As of January 2019, 337 more reactors were proposed. Most reactors under construction are generation III reactors in Asia. Nuclear power is classified as a low greenhouse gas energy supply technology, along with renewable energy, by the Intergovernmental Panel on Climate Change. Since its commercialization in the 1970s, nuclear power is estimated to have prevented about 1.84 million air pollution-related deaths and the emission of about 64 billion tonnes of carbon dioxide equivalent that would otherwise have resulted from the burning of fossil fuels. There is an ongoing debate about nuclear power. Proponents, such as the World Nuclear Association and Environmentalists for Nuclear Energy, contend that nuclear power is a safe, sustainable energy source that reduces carbon emissions. Opponents, such as Greenpeace and NIRS, contend that nuclear power poses many threats to people and the environment. Accidents at nuclear power plants include the Chernobyl disaster in the Soviet Union in 1986, the Fukushima Daiichi nuclear disaster in Japan in 2011, and the more contained Three Mile Island accident in the United States in 1979.
There have also been some nuclear submarine accidents. Nevertheless, nuclear reactors have caused the lowest number of fatalities per unit of energy generated when compared to fossil fuels and hydropower: coal, natural gas, and hydroelectricity have each caused more fatalities per unit of energy, due to air pollution and accidents. Collaboration on research and development towards greater efficiency and the recycling of spent fuel in future generation IV reactors presently includes Euratom and more than 10 permanent member countries globally. In 1932 physicist Ernest Rutherford discovered that when lithium atoms were "split" by protons from a proton accelerator, immense amounts of energy were released in accordance with the principle of mass–energy equivalence. However, he and other nuclear physics pioneers, including Niels Bohr and Albert Einstein, believed that harnessing the power of the atom for practical purposes in the near future was unlikely. The same year, his doctoral student James Chadwick discovered the neutron, which was immediately recognized as a potential tool for nuclear experimentation because of its lack of an electric charge.
Experiments bombarding materials with neutrons led Frédéric and Irène Joliot-Curie to discover induced radioactivity in 1934, which allowed the creation of radium-like elements. Further work by Enrico Fermi in the 1930s focused on using slow neutrons to increase the effectiveness of induced radioactivity. Experiments bombarding uranium with neutrons led Fermi to believe he had created a new transuranic element, dubbed hesperium. In 1938, German chemists Otto Hahn and Fritz Strassmann, along with Austrian physicist Lise Meitner and Meitner's nephew, Otto Robert Frisch, conducted experiments with the products of neutron-bombarded uranium as a means of further investigating Fermi's claims. They determined that the tiny neutron split the nucleus of the massive uranium atoms into two roughly equal pieces, contradicting Fermi. This was a surprising result: all other forms of nuclear decay involved only small changes to the mass of the nucleus, whereas this process, dubbed "fission" as a reference to biology, involved a complete rupture of the nucleus.
Numerous scientists, Leó Szilárd among the first, recognized that if fission reactions released additional neutrons, a self-sustaining nuclear chain reaction could result. Once this was experimentally confirmed and announced by Frédéric Joliot-Curie in 1939, scientists in many countries, on the cusp of World War II, petitioned their governments for support of nuclear fission research toward the development of a nuclear weapon. In the United States, where Fermi and Szilárd had both emigrated, the discovery of the nuclear chain reaction led to the creation of the first man-made reactor, the research reactor known as Chicago Pile-1, which achieved criticality on December 2, 1942. The reactor's development was part of the Manhattan Project, the Allied effort to create atomic bombs during World War II. It led to the building of larger single-purpose production reactors, such as the X-10 Pile, for the production of weapons-grade plutonium for use in the first nuclear weapons.
The United States tested the first nuclear weapon in July 1945, the Trinity test, with the atomic bombings of Hiroshima and Nagasaki taking place one month later. In August 1945, the first widely distributed account of nuclear energy, the pocketbook The Atomic Age, discussed the peaceful future uses of nuclear energy.
Hydroelectricity is electricity produced from hydropower. In 2015, hydropower generated 16.6% of the world's total electricity and 70% of all renewable electricity, and was expected to increase by about 3.1% each year for the next 25 years. Hydropower is produced in 150 countries, with the Asia-Pacific region generating 33 percent of global hydropower in 2013. China is the largest hydroelectricity producer, with 920 TWh of production in 2013, representing 16.9 percent of its domestic electricity use. The cost of hydroelectricity is relatively low, making it a competitive source of renewable electricity, and the hydro station consumes no water, unlike gas plants. The average cost of electricity from a hydro station larger than 10 megawatts is 3 to 5 U.S. cents per kilowatt-hour. With a dam and reservoir it is also a flexible source of electricity, since the amount produced by the station can be varied up or down rapidly to adapt to changing energy demands. Once a hydroelectric complex is constructed, the project produces no direct waste and, in many cases, has a lower output level of greenhouse gases than fossil-fuel-powered energy plants.
Hydropower has been used since ancient times to grind flour and perform other tasks. In the mid-1770s, French engineer Bernard Forest de Bélidor published Architecture Hydraulique, which described vertical- and horizontal-axis hydraulic machines. By the late 19th century, the electrical generator had been developed and could be coupled with hydraulics; the growing demand of the Industrial Revolution would drive development as well. In 1878 the world's first hydroelectric power scheme was developed at Cragside in Northumberland, England, by William Armstrong; it was used to power a single arc lamp in his art gallery. The old Schoelkopf Power Station No. 1, near Niagara Falls on the U.S. side, began to produce electricity in 1881. The first Edison hydroelectric power station, the Vulcan Street Plant, began operating September 30, 1882, in Appleton, Wisconsin, with an output of about 12.5 kilowatts. By 1886 there were 45 hydroelectric power stations in the U.S. and Canada, and by 1889 there were 200 in the U.S. alone. At the beginning of the 20th century, many small hydroelectric power stations were being constructed by commercial companies in mountains near metropolitan areas.
Grenoble, France, held the International Exhibition of Hydropower and Tourism, which drew over one million visitors. By 1920, when 40% of the power produced in the United States was hydroelectric, the Federal Power Act was enacted into law. The Act created the Federal Power Commission to regulate hydroelectric power stations on federal land and water. As the power stations became larger, their associated dams developed additional purposes, including flood control and navigation. Federal funding became necessary for large-scale development, and federally owned corporations such as the Tennessee Valley Authority and the Bonneville Power Administration were created. Additionally, the Bureau of Reclamation, which had begun a series of western U.S. irrigation projects in the early 20th century, was now constructing large hydroelectric projects such as the 1928 Hoover Dam. The U.S. Army Corps of Engineers was also involved in hydroelectric development, completing the Bonneville Dam in 1937 and being recognized by the Flood Control Act of 1936 as the premier federal flood control agency.
Hydroelectric power stations continued to become larger throughout the 20th century. Hydropower came to be referred to as "white coal" for its abundance. Hoover Dam's initial 1,345 MW power station was the world's largest hydroelectric power station in 1936; it was surpassed by the Itaipu Dam, opened in 1984 in South America with 14,000 MW of capacity, which was in turn surpassed in 2008 by the Three Gorges Dam in China at 22,500 MW. Hydroelectricity supplies some countries, including Norway, the Democratic Republic of the Congo, and Brazil, with over 85% of their electricity. The United States has over 2,000 hydroelectric power stations that supply 6.4% of its total electrical production output, or 49% of its renewable electricity. The technical potential for hydropower development around the world is much greater than actual production: the percentage of potential hydropower capacity that has not been developed is 71% in Europe, 75% in North America, 79% in South America, 95% in Africa, 95% in the Middle East, and 82% in Asia-Pacific.
The political realities of new reservoirs in western countries, economic limitations in the third world, and the lack of a transmission system in undeveloped areas mean that perhaps 25% of the remaining technically exploitable potential can be developed before 2050, with the bulk of that in the Asia-Pacific area. Some countries have largely developed their hydropower potential and have little room for growth: Switzerland produces 88% of its potential and Mexico 80%. Most hydroelectric power comes from the potential energy of dammed water driving a water turbine and generator. The power extracted from the water depends on the volume of flow and on the difference in height between the source and the water's outflow; this height difference is called the head. A large pipe (the penstock) delivers water from the reservoir to the turbine. Pumped-storage hydroelectricity produces electricity to supply high peak demands by moving water between reservoirs at different elevations: at times of low electrical demand, excess generation capacity is used to pump water into the higher reservoir.
When demand becomes greater, water is released back into the lower reservoir through a turbine. Pumped-storage schemes provide the most commercially important means of large-scale grid energy storage and improve the daily capacity factor of the generation system.
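The dependence of extracted power on flow volume and head described above is captured by the standard relation P = η·ρ·g·Q·h. A minimal sketch; the flow, head, and efficiency values below are illustrative assumptions, not figures from the text:

```python
def hydro_power_mw(flow_m3_s, head_m, efficiency=0.9):
    """Estimate hydroelectric power output in megawatts.

    P = eta * rho * g * Q * h, with water density rho = 1000 kg/m^3
    and g = 9.81 m/s^2. `efficiency` is an assumed combined
    turbine/generator efficiency (around 0.9 for modern plants).
    """
    rho, g = 1000.0, 9.81
    watts = efficiency * rho * g * flow_m3_s * head_m
    return watts / 1e6

# Example: 500 m^3/s of flow through a 100 m head
print(f"{hydro_power_mw(500, 100):.0f} MW")
```

The formula makes the text's point concrete: doubling either the flow or the head doubles the available power.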
Economy of New Caledonia
New Caledonia is a major source of nickel, containing about 10% of the world's known nickel supply. The islands contain about 7,100,000 tonnes of nickel. With an annual production of about 107,000 tonnes in 2009, New Caledonia was the world's fifth-largest producer, after Russia, Indonesia, and Australia. In recent years, the economy has suffered because of depressed international demand for nickel during the global financial crisis. Only a negligible amount of the land is suitable for cultivation, and food accounts for about 20% of imports. In addition to nickel, substantial financial support from France and tourism are key to the health of the economy. In the 2000s, large additions were made to nickel mining capacity; the Goro Nickel Plant was expected to be one of the largest nickel-producing plants on Earth, producing an estimated 20% of the global nickel supply once full-scale production began in 2013. However, the need to respond to environmental concerns over the country's globally recognized ecological heritage may need to be factored into the capitalization of mining operations.
The GDP of New Caledonia in 2007 was 8.8 billion US dollars at market exchange rates, making it the fourth-largest economy in Oceania after Australia, New Zealand, and Hawaii. The GDP per capita was 36,376 US dollars in 2007, lower than in Australia and Hawaii but higher than in New Zealand. In 2007, exports from New Caledonia amounted to 2.11 billion US dollars, 96.3% of which were mineral products and alloys. Imports amounted to 2.88 billion US dollars: 26.6% came from Metropolitan France, 16.1% from other European countries, 13.6% from Singapore, 10.7% from Australia, 4.0% from New Zealand, 3.2% from the United States, 3.0% from China, 3.0% from Japan, and 22.7% from other countries. As of 2007, about 200 Japanese couples travelled to New Caledonia each year for their wedding and honeymoon, and Oceania Flash reported in 2007 that one company planned to build a new wedding chapel to accommodate Japanese weddings, supplementing the Le Meridien Resort in Nouméa. New Caledonia is also a popular destination for groups of Australian high school students who are studying French.
See also:
- Economy of France in: French Guiana, French Polynesia, Martinique, New Caledonia, Réunion, Saint Barthélemy, Saint Martin, Saint Pierre and Miquelon, Wallis and Futuna
- Taxation in France
- Economic history of France
- Poverty in France
A currency, in the most specific sense, is money in any form when in use or circulation as a medium of exchange, especially circulating banknotes and coins. A more general definition is that a currency is a system of money in common use by the people of a nation. Under this definition, US dollars, pounds sterling, Australian dollars, euros, Russian rubles, and Indian rupees are examples of currency. These various currencies are recognized as stores of value and are traded between nations in foreign exchange markets, which determine the relative values of the different currencies. Currencies in this sense are defined by governments, and each type has limited boundaries of acceptance. Other definitions of the term "currency" are discussed in the articles on banknotes and money; the latter definition, pertaining to the currency systems of nations, is the topic of this article. Currencies can be classified into two monetary systems, fiat money and commodity money, depending on what guarantees the currency's value.
Some currencies are legal tender in certain political jurisdictions; others are simply traded for their economic value. Digital currency has also arisen with the popularity of the Internet. Early money was a form of receipt representing grain stored in temple granaries in Sumer in ancient Mesopotamia and in Ancient Egypt. In this first stage of currency, metals were used as symbols to represent value stored in the form of commodities; this formed the basis of trade in the Fertile Crescent for over 1,500 years. However, the collapse of the Near Eastern trading system pointed to a flaw: in an era where there was no safe place to store value, the value of a circulating medium could only be as sound as the forces that defended that store, and a trade could only reach as far as the credibility of that military. By the late Bronze Age, however, a series of treaties had established safe passage for merchants around the Eastern Mediterranean, spreading from Minoan Crete and Mycenae in the northwest to Elam and Bahrain in the southeast.
It is not known what was used as a currency for these exchanges, but it is thought that ox-hide-shaped ingots of copper, produced in Cyprus, may have functioned as one. It is thought that the increase in piracy and raiding associated with the Bronze Age collapse, possibly produced by the Peoples of the Sea, brought the trading system of oxhide ingots to an end. It was only the recovery of Phoenician trade in the 10th and 9th centuries BC that led to a return to prosperity and the appearance of real coinage, first in Anatolia with Croesus of Lydia and subsequently with the Greeks and Persians. In Africa, many forms of value store have been used, including beads, ivory, various forms of weapons, the manilla currency, and ochre and other earth oxides. The manilla rings of West Africa were one of the currencies used from the 15th century onwards to sell slaves. African currency is still notable for its variety, and in many places various forms of barter still apply. These factors led to the metal itself becoming the store of value: first silver, then both silver and gold, and at one point also bronze.
Today other, non-precious metals are also used for coins. Metals were mined and stamped into coins; this was to assure the individual accepting the coin that he was getting a certain known weight of precious metal. Coins could be counterfeited, but the existence of standard coins also created a new unit of account, which helped lead to banking. Archimedes' principle provided the next link: coins could now be tested for their fine weight of metal, and thus the value of a coin could be determined even if it had been shaved, debased, or otherwise tampered with. Most major economies using coinage had several tiers of coins of different values, made of copper, silver, and gold. Gold coins were the most valuable and were used for large purchases, payment of the military, and backing of state activities. Units of account were often defined as the value of a particular type of gold coin. Silver coins were used for midsized transactions, and sometimes also defined a unit of account, while coins of copper or silver, or some mixture of the two, might be used for everyday transactions.
This system had been used in ancient India since the time of the Mahajanapadas. The exact ratios between the values of the three metals varied between different eras and places. However, the rarity of gold consistently made it more valuable than silver, and silver was worth more than copper. In premodern China, the need for credit and for a medium of exchange less physically cumbersome than large numbers of copper coins led to the introduction of paper money, i.e. banknotes. Their introduction was a gradual process that lasted from the late Tang dynasty into the Song dynasty. It began as a means for merchants to exchange heavy coinage for receipts of deposit issued as promissory notes by wholesalers' shops; these notes were valid for temporary use in a small regional territory. In the 10th century, the Song dynasty government began to circulate such notes among the traders in its monopolized salt industry. The Song government granted several shops the right to issue banknotes, and in the early 12th century the government took over these shops to produce state-issued currency.
Electricity is the set of physical phenomena associated with the presence and motion of matter that has a property of electric charge. Early on, electricity was considered unrelated to magnetism; later, many experimental results and the development of Maxwell's equations indicated that both electricity and magnetism arise from a single phenomenon: electromagnetism. Various common phenomena are related to electricity, including lightning, static electricity, electric heating, and electric discharges. The presence of an electric charge, which can be either positive or negative, produces an electric field, and the movement of electric charges produces a magnetic field. When a charge is placed in a location with a non-zero electric field, a force will act on it; the magnitude of this force is given by Coulomb's law. If that charge were to move, the electric field would be doing work on it. We can thus speak of electric potential at a certain point in space, equal to the work done by an external agent in carrying a unit of positive charge from an arbitrarily chosen reference point to that point without any acceleration; it is measured in volts.
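The force mentioned above is quantified by Coulomb's law, F = k·q1·q2/r². A minimal sketch; the charges and separation used in the example are made-up illustrative values:

```python
# Coulomb constant k, in N*m^2/C^2 (CODATA value)
K = 8.9875517923e9

def coulomb_force(q1_c, q2_c, r_m):
    """Magnitude of the electrostatic force between two point charges.

    q1_c, q2_c are charges in coulombs; r_m is the separation in metres.
    Only the magnitude is returned; direction (attraction for opposite
    charges, repulsion for like charges) is left to the caller.
    """
    return K * abs(q1_c * q2_c) / r_m ** 2

# Two 1-microcoulomb charges held 10 cm apart
print(f"{coulomb_force(1e-6, 1e-6, 0.1):.3f} N")
```

Note the inverse-square dependence on distance: halving the separation quadruples the force.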
Electricity is at the heart of many modern technologies, being used for electric power, where electric current is used to energise equipment. Electrical phenomena have been studied since antiquity, though progress in theoretical understanding remained slow until the seventeenth and eighteenth centuries. Practical applications for electricity were few, and it would not be until the late nineteenth century that electrical engineers were able to put it to industrial and residential use. The rapid expansion in electrical technology at this time transformed industry and society, becoming a driving force for the Second Industrial Revolution. Electricity's extraordinary versatility means it can be put to an almost limitless set of applications, which include transport, lighting, and computation. Electrical power is now the backbone of modern industrial society. Long before any knowledge of electricity existed, people were aware of shocks from electric fish. Ancient Egyptian texts dating from 2750 BCE referred to these fish as the "Thunderer of the Nile" and described them as the "protectors" of all other fish.
Electric fish were again reported millennia later by ancient Greek and Arabic naturalists and physicians. Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by catfish and electric rays, and knew that such shocks could travel along conducting objects. Patients suffering from ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them. Possibly the earliest and nearest approach to the discovery of the identity of lightning and electricity from any other source is to be attributed to the Arabs, who before the 15th century had the Arabic word for lightning, ra‘ad, applied to the electric ray. Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus made a series of observations on static electricity around 600 BCE, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing.
Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity. According to a controversial theory, the Parthians may have had knowledge of electroplating, based on the 1936 discovery of the Baghdad Battery, which resembles a galvanic cell, though it is uncertain whether the artifact was electrical in nature. Electricity would remain little more than an intellectual curiosity for millennia until 1600, when the English scientist William Gilbert wrote De Magnete, in which he made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the New Latin word electricus to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646. Further work was conducted in the 17th and early 18th centuries by Otto von Guericke, Robert Boyle, Stephen Gray, and C. F. du Fay.
In the 18th century, Benjamin Franklin conducted extensive research in electricity, selling his possessions to fund his work. In June 1752 he is reputed to have attached a metal key to the bottom of a dampened kite string and flown the kite in a storm-threatened sky. A succession of sparks jumping from the key to the back of his hand showed that lightning was indeed electrical in nature. He also explained the apparently paradoxical behavior of the Leyden jar as a device for storing large amounts of electrical charge in terms of electricity consisting of both positive and negative charges. In 1791, Luigi Galvani published his discovery of bioelectromagnetics, demonstrating that electricity was the medium by which neurons passed signals to the muscles. Alessandro Volta's battery, or voltaic pile, of 1800, made from alternating layers of zinc and copper, provided scientists with a more reliable source of electrical energy than the electrostatic machines previously used.
Sugar is the generic name for sweet-tasting, soluble carbohydrates, many of which are used in food. The various types of sugar are derived from different sources. Simple sugars are called monosaccharides and include glucose, fructose, and galactose. "Table sugar" or "granulated sugar" refers to sucrose, a disaccharide of glucose and fructose. In the body, sucrose is hydrolysed into fructose and glucose. Sugars are found in the tissues of most plants, but sucrose is especially concentrated in sugarcane and sugar beet, making them ideal for efficient commercial extraction to make refined sugar. Sugarcane originated in the tropical Indian subcontinent and Southeast Asia and was known of from before 6,000 BP, while sugar beet, first described in writing by Olivier de Serres, originated in southwestern and Southeast Europe, along the Atlantic coasts and the Mediterranean Sea, in North Africa, Macaronesia, and into Western Asia. In 2016, the combined world production of those two crops was about two billion tonnes. Other disaccharides include maltose and lactose. Longer chains of sugar molecules are called polysaccharides.
Some other chemical substances, such as glycerol and sugar alcohols, may have a sweet taste but are not classified as sugar. Sucrose is used in prepared foods, is sometimes added to commercially available beverages, and may be used by people as a sweetener for foods and beverages. The average person consumes about 24 kilograms of sugar each year, or 33.1 kilograms in developed countries, equivalent to over 260 food calories per day. As sugar consumption grew in the latter part of the 20th century, researchers began to examine whether a diet high in sugar, especially refined sugar, was damaging to human health. Excessive consumption of sugar has been implicated in the onset of obesity, cardiovascular disease, and tooth decay. Numerous studies have tried to clarify those implications, but with varying results, because of the difficulty of finding populations for use as controls that consume little or no sugar. In 2015, the World Health Organization recommended that adults and children reduce their intake of free sugars to less than 10% of their total energy intake, and encouraged a reduction to below 5%.
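The per-capita figures above can be cross-checked with a short calculation, using the standard nutritional value of roughly 4 kilocalories per gram of carbohydrate:

```python
# Cross-check of the per-capita sugar figures quoted in the text.
kg_per_year = 24          # average per-capita sugar consumption
kcal_per_gram = 4         # approximate food energy of carbohydrate

grams_per_day = kg_per_year * 1000 / 365
kcal_per_day = grams_per_day * kcal_per_gram
print(f"{grams_per_day:.0f} g/day, about {kcal_per_day:.0f} kcal/day")
# ~66 g/day and ~263 kcal/day, consistent with "over 260 food calories per day"
```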
The etymology reflects the spread of the commodity. From Sanskrit शर्करा, meaning "ground or candied sugar" (originally "grit" or "gravel"), came Persian shakar, whence Arabic سكر, whence Medieval Latin succarum, whence 12th-century French sucre and the English word sugar. Italian zucchero, Spanish azúcar, and Portuguese açúcar came directly from the Arabic, the Spanish and Portuguese words retaining the Arabic definite article. The earliest attested Greek word is σάκχαρις. The English word jaggery, a coarse brown sugar made from date palm sap or sugarcane juice, has a similar etymological origin: Portuguese jágara, from the Malayalam ചക്കരാ, itself from the Sanskrit शर्करा. Sugar has been produced in the Indian subcontinent since ancient times, and its cultivation spread from there into modern-day Afghanistan through the Khyber Pass. It was not plentiful or cheap in early times, and in most parts of the world honey was more often used for sweetening. Originally, people chewed raw sugarcane to extract its sweetness. Sugarcane is native to Southeast Asia.
Different species seem to have originated in different locations, with Saccharum barberi originating in India and S. edule and S. officinarum coming from New Guinea. One of the earliest historical references to sugarcane is in Chinese manuscripts dating to the 8th century BCE, which state that the use of sugarcane originated in India. In the tradition of Indian medicine, sugarcane is known by the name Ikṣu and sugarcane juice is known as Phāṇita; its varieties and characteristics are defined in nighaṇṭus such as the Bhāvaprakāśa. Sugar remained relatively unimportant until the Indians discovered methods of turning sugarcane juice into granulated crystals that were easier to store and to transport. Crystallized sugar was discovered by the time of the Imperial Guptas, around the 5th century CE. In the local Indian language, these crystals were called khanda, the source of the word candy. Indian sailors, who carried clarified butter and sugar as supplies, introduced knowledge of sugar along the various trade routes they travelled.
Traveling Buddhist monks took sugar crystallization methods to China. During the reign of Harsha in North India, Indian envoys in Tang China taught methods of cultivating sugarcane after Emperor Taizong of Tang made known his interest in sugar, and China established its first sugarcane plantations in the seventh century. Chinese documents confirm at least two missions to India, initiated in 647 CE, to obtain technology for sugar refining. In the Indian subcontinent, the Middle East, and China, sugar became a staple of cooking and desserts. Nearchus, admiral of Alexander of Macedonia, knew of sugar in the year 325 BC because of his participation in Alexander's campaign in India. The Greek physician Pedanius Dioscorides in the 1st century CE described sugar in his medical treatise De Materia Medica, and Pliny the Elder, a 1st-century CE Roman, described sugar in his Natural History: "Sugar is made in Arabia as well, but Indian sugar is better. It is a kind of honey found in cane, white as gum, and it crunches between the teeth.
It comes in lumps the size of a hazelnut. Sugar is used only for medical purposes." Crusaders brought sugar back to Europe after their campaigns in the Holy Land.