Probability is the measure of the likelihood that an event will occur. Probability is quantified as a number between 0 and 1: the higher the probability of an event, the more certain it is that the event will occur. A simple example is the tossing of a fair coin. Since the coin is unbiased, the two outcomes are equally probable; the probability of heads equals the probability of tails. Since no other outcomes are possible, the probability of each is 1/2, and this type of probability is called a priori probability. Probability theory is used to describe the underlying mechanics and regularities of complex systems. For example, tossing a coin twice can yield head-head, head-tail, tail-head, or tail-tail. The probability of the outcome head-head is 1 out of 4 outcomes, or 1/4 or 0.25, and this interpretation considers probability to be the relative frequency of outcomes in the long run. A modification of this is propensity probability, which interprets probability as the tendency of some experiment to yield a certain outcome. Subjectivists instead assign numbers per subjective probability, i.e. as a degree of belief.
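The relative-frequency interpretation above can be illustrated with a short simulation: a minimal sketch (the trial count and seed are arbitrary choices) showing that the long-run frequency of the head-head outcome approaches the a priori value of 1/4.

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

def head_head_frequency(trials):
    """Estimate P(head-head) by tossing a fair coin twice, many times."""
    hits = sum(
        1 for _ in range(trials)
        if random.random() < 0.5 and random.random() < 0.5
    )
    return hits / trials

# The relative frequency approaches the a priori probability 1/4.
print(head_head_frequency(100_000))  # close to 0.25
```

With more trials the estimate tightens around 0.25, which is the sense in which frequentists identify probability with long-run relative frequency.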
The degree of belief has been interpreted as the price at which you would buy or sell a bet that pays 1 unit of utility if E occurs and 0 if E does not occur. The most popular version of subjective probability is Bayesian probability, which includes expert knowledge as well as data to produce probabilities. The expert knowledge is represented by a prior probability distribution, and the data are incorporated in a likelihood function. The product of the prior and the likelihood yields a posterior probability distribution that incorporates all the information known to date. The scientific study of probability is a modern development of mathematics. Gambling shows that there has been an interest in quantifying the ideas of probability for millennia, but there are reasons for the slow development of the mathematics of probability, even though games of chance provided the impetus for its study. According to Richard Jeffrey, before the middle of the seventeenth century the term probable meant approvable: a probable action or opinion was one such as people would undertake or hold.
However, in legal contexts especially, probable could apply to propositions for which there was good evidence. The sixteenth-century Italian polymath Gerolamo Cardano demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes.
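The Bayesian prior-times-likelihood update described above can be sketched numerically. The following is a minimal, hypothetical example: a discrete prior over three candidate coin biases, updated with observed toss data under a binomial likelihood; the candidate biases and data are illustrative assumptions, not values from the text.

```python
from math import comb

biases = [0.25, 0.5, 0.75]   # candidate values of P(heads) (assumed)
prior  = [1/3, 1/3, 1/3]     # expert knowledge: all equally plausible

heads, tosses = 7, 10        # observed data (assumed)

# Likelihood of the data under each candidate bias (binomial model).
likelihood = [comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)
              for p in biases]

# Posterior is proportional to prior * likelihood, normalised to sum to 1.
unnorm = [pr * lk for pr, lk in zip(prior, likelihood)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

print(posterior)  # most mass shifts toward the 0.75 bias
```

After seeing 7 heads in 10 tosses, the posterior concentrates on the hypothesis most compatible with the data, exactly the "all information known to date" combination the text describes.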
Ernest Rutherford, 1st Baron Rutherford of Nelson, OM, FRS, was a New Zealand-born British physicist who came to be known as the father of nuclear physics. Encyclopædia Britannica considers him to be the greatest experimentalist since Michael Faraday. His early work on radioactivity was done at McGill University in Canada. Rutherford moved in 1907 to the Victoria University of Manchester in the UK, where he performed his most famous work after he became a Nobel laureate. He conducted research that led to the first splitting of the atom in 1917, in a reaction between nitrogen and alpha particles, in which he discovered the proton. Rutherford became Director of the Cavendish Laboratory at the University of Cambridge in 1919. After his death in 1937, he was honoured by being interred with the greatest scientists of the United Kingdom, near Sir Isaac Newton's tomb in Westminster Abbey. The chemical element rutherfordium was named after him in 1997. Ernest Rutherford was the son of James Rutherford, a farmer, and his wife Martha Thompson, originally from Hornchurch, England.
James had emigrated to New Zealand from Perth, Scotland, to raise a little flax. Ernest was born at Brightwater, near Nelson, New Zealand. His first name was mistakenly spelled Earnest when his birth was registered. Rutherford's mother Martha Thompson was a schoolteacher. He studied at Havelock School and Nelson College and won a scholarship to study at Canterbury College, University of New Zealand. In 1898 Thomson recommended Rutherford for a position at McGill University in Montreal, Canada. He was to replace Hugh Longbourne Callendar, who held the chair of Macdonald Professor of Physics and was coming to Cambridge. In 1901 he gained a DSc from the University of New Zealand. In 1907 Rutherford returned to Britain to take the chair of physics at the Victoria University of Manchester. During World War I, he worked on a top secret project to solve the practical problems of submarine detection by sonar. In 1916 he was awarded the Hector Memorial Medal, and in 1919 he returned to the Cavendish, succeeding J. J.
Thomson as the Cavendish Professor and Director. Between 1925 and 1930 he served as President of the Royal Society, and in 1933 Rutherford was one of the two inaugural recipients of the T. K. Sidey Medal, set up by the Royal Society of New Zealand as an award for outstanding scientific research. For some time before his death, Rutherford had a hernia, which he had neglected to have fixed. Despite an emergency operation in London, he died four days afterwards of what physicians termed intestinal paralysis. After cremation at Golders Green Crematorium, he was given the high honour of burial in Westminster Abbey, near Isaac Newton and other illustrious British scientists. At Cambridge, Rutherford started to work with J. J. Thomson on the effects of X-rays on gases. Hearing of Becquerel's experience with uranium, Rutherford started to explore its radioactivity. Continuing his research in Canada, he coined the terms alpha ray and beta ray in 1899 to describe the two distinct types of radiation. He discovered that thorium gave off a gas which produced an emanation which was itself radioactive, and he found that a sample of this radioactive material of any size invariably took the same amount of time for half the sample to decay: its half-life.
The rate law or rate equation for a chemical reaction is an equation that links the reaction rate with the concentrations or pressures of the reactants and constant parameters. For many reactions the rate is given by a power law such as r = k[A]^x [B]^y, where [A] and [B] express the concentrations of the species A and B. The exponents x and y are the partial orders of reaction and must be determined experimentally. The constant k is the rate constant or rate coefficient of the reaction. The value of this coefficient k may depend on conditions such as temperature, ionic strength, or the surface area of an adsorbent. For elementary reactions, which consist of a single step, the order equals the molecularity, as predicted by collision theory. For example, a bimolecular elementary reaction A + B → products will be second order overall and first order in each reactant. For multistep reactions, the order of each elementary step equals its molecularity, but the overall rate equation may involve a fractional order and may depend on the concentration of an intermediate species. The rate equation is a differential equation and can be integrated to obtain an integrated rate equation that links concentrations of reactants or products with time. A zero order reaction has a rate that is independent of the concentration of the reactant: increasing the concentration of the reacting species will not speed up the rate of the reaction, i.e. the amount of substance reacted is proportional to the elapsed time.
Zero order reactions are typically found when a material that is required for the reaction to proceed, such as a surface or a catalyst, is saturated by the reactants. The rate law for a zero order reaction is r = k, where r is the reaction rate and k is the rate constant. The integrated rate law is [A] = −kt + [A]₀, where [A] represents the concentration of the chemical of interest at a particular time and [A]₀ represents the initial concentration. A reaction is zero order if concentration data plotted versus time give a straight line: a plot of [A] vs. time t gives a straight line with a slope of −k. The half-life of a reaction describes the time needed for half of the reactant to be depleted. A first order reaction depends on the concentration of only one reactant. Other reactants can be present, but each will be zero order. The rate law for a reaction that is first order with respect to a reactant A is −d[A]/dt = r = k[A], where k is the first order rate constant, which has units of 1/s. The integrated first order rate law is ln[A] = −kt + ln[A]₀. A plot of ln[A] vs. time t gives a straight line with a slope of −k. The half-life of a first order reaction is independent of the concentration and is given by t1/2 = (ln 2)/k.
This equation indicates that the fraction of the reactant population that will break down in each equal time period is independent of the initial amount present.
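The integrated first-order rate law and half-life above can be checked numerically. This is a minimal sketch; the rate constant and initial concentration are illustrative assumed values, not data from the text.

```python
import math

k = 0.05    # first order rate constant, 1/s (assumed value)
A0 = 1.0    # initial concentration [A]0, mol/L (assumed value)

def concentration(t):
    """[A] at time t for a first-order reaction: [A] = [A]0 * exp(-k t)."""
    return A0 * math.exp(-k * t)

# Half-life t1/2 = (ln 2)/k, independent of the initial concentration.
half_life = math.log(2) / k
print(half_life)                       # ~13.86 s
print(concentration(half_life) / A0)   # 0.5: half the reactant remains
```

Evaluating the concentration at t1/2 returns exactly half of [A]₀, and doing so again at 2·t1/2 would give a quarter, reflecting that the same fraction decays in each equal time period.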
Nuclear physics is the field of physics that studies atomic nuclei and their constituents and interactions. Other forms of nuclear matter are also studied. Nuclear physics should not be confused with atomic physics, which studies the atom as a whole. Discoveries in nuclear physics have led to applications in many fields; such applications are studied in the field of nuclear engineering. Particle physics evolved out of nuclear physics and the two fields are typically taught in close association. Nuclear astrophysics, the application of nuclear physics to astrophysics, is crucial in explaining the inner workings of stars. The discovery of the electron by J. J. Thomson in 1897 was an indication that the atom had internal structure. In the years that followed, radioactivity was extensively investigated, notably by Marie and Pierre Curie as well as by Ernest Rutherford and his collaborators. By the turn of the century, physicists had discovered three types of radiation emanating from atoms, which they named alpha, beta, and gamma radiation.
Experiments by Otto Hahn in 1911 and by James Chadwick in 1914 showed that the beta decay spectrum was continuous rather than discrete. That is, electrons were ejected from the atom with a continuous range of energies, rather than the discrete amounts of energy that were observed in gamma decays. This was a problem for physics at the time, because it seemed to indicate that energy was not conserved in these decays. The 1903 Nobel Prize in Physics was awarded jointly to Becquerel, for his discovery, and to Marie and Pierre Curie, for their subsequent research into radioactivity. Rutherford was awarded the Nobel Prize in Chemistry in 1908 for his investigations into the disintegration of the elements and the chemistry of radioactive substances. In 1905 Albert Einstein formulated the idea of mass–energy equivalence. In 1906 Ernest Rutherford published Retardation of the α Particle from Radium in passing through matter. Hans Geiger expanded on this work in a communication to the Royal Society with experiments he and Rutherford had done, passing alpha particles through air, aluminum foil and gold leaf.
More work was published in 1909 by Geiger and Ernest Marsden, and in 1911–1912 Rutherford went before the Royal Society to explain the experiments and propound the new theory of the atomic nucleus as we now understand it. The plum pudding model had predicted that the alpha particles should come out of the foil with their trajectories being at most slightly bent. But Rutherford instructed his team to look for something that shocked him to observe: a few of the particles were scattered through large angles, even completely backwards in some cases. He likened it to firing a bullet at tissue paper and having it bounce off. As an example, in this model a nitrogen-14 nucleus consisted of 14 protons and 7 electrons. The Rutherford model worked well until studies of nuclear spin were carried out by Franco Rasetti at the California Institute of Technology in 1929.
In biology, tissue is a cellular organizational level intermediate between cells and a complete organ. A tissue is an ensemble of similar cells from the same origin that together carry out a specific function. Organs are formed by the grouping together of multiple tissues. The study of tissue is known as histology or, in connection with disease, histopathology. The classical tools for studying tissues are the paraffin block in which tissue is embedded and sectioned, the histological stain, and the optical microscope. In the last couple of decades, developments in microscopy and immunofluorescence have added to the detail that can be observed. With these tools, the appearances of tissues can be examined in health and disease. Animal tissues are grouped into four basic types: connective, muscle, nervous, and epithelial. Collections of tissues joined in structural units to serve a common function compose organs. While all animals can generally be considered to contain the four tissue types, the manifestation of these tissues can differ depending on the type of organism. For example, the origin of the cells comprising a particular tissue type may differ developmentally for different classifications of animals.
By contrast, a true epithelial tissue is present only in a layer of cells held together via occluding junctions called tight junctions. This tissue covers all surfaces that come in contact with the external environment, such as the skin and the airways. It serves functions of protection and absorption, and is separated from other tissues below by a basal lamina. Connective tissues are made up of cells separated by non-living material, which is called an extracellular matrix. This matrix can be liquid or rigid; for example, blood contains plasma as its matrix, and bone's matrix is rigid. Connective tissue gives shape to organs and holds them in place. Bone, ligament and areolar tissues are examples of connective tissues. One method of classifying connective tissues is to divide them into three types: fibrous connective tissue, skeletal connective tissue, and fluid connective tissue. Muscle cells form the active contractile tissue of the body, known as muscle tissue or muscular tissue. Muscle tissue functions to produce force and cause motion, either locomotion or movement within internal organs.
Cells comprising the central nervous system and peripheral nervous system are classified as nervous tissue.
Nondimensionalization is the partial or full removal of units from an equation involving physical quantities by a suitable substitution of variables. This technique can simplify and parameterize problems where measured units are involved, and it is closely related to dimensional analysis. In some physical systems, the term scaling is used interchangeably with nondimensionalization. The characteristic units used refer to quantities intrinsic to the system, rather than arbitrary units such as SI units. Nondimensionalization is not the same as converting extensive quantities in an equation to intensive quantities, but nondimensionalization can recover characteristic properties of a system: for example, if a system has an intrinsic resonance frequency, length, or time constant, nondimensionalization can recover these values. The technique is especially useful for systems that can be described by differential equations. One important use is in the analysis of control systems. Many illustrative examples of nondimensionalization originate from simplifying differential equations, because a large body of physical problems can be formulated in terms of differential equations.
An example of an application is dimensional analysis; another example is normalization in statistics. Measuring devices are practical examples of nondimensionalization occurring in everyday life: measuring devices are calibrated relative to some known unit, and subsequent measurements are made relative to this standard. Then, the absolute value of the measurement is recovered by scaling with respect to the standard. Suppose a pendulum is swinging with a particular period T. For such a system, it is advantageous to perform calculations relating to the swinging relative to T. In some sense, this is normalizing the measurement with respect to the period. Measurements made relative to an intrinsic property of a system will apply to other systems which have the same intrinsic property. It also allows one to compare a common property of different implementations of the same system. Nondimensionalization determines in a systematic manner the characteristic units of a system to use, without relying heavily on prior knowledge of the system's intrinsic properties.
In fact, nondimensionalization can suggest the parameters which should be used for analyzing a system; however, it is necessary to start with an equation that describes the system appropriately. The last three steps of the procedure are usually specific to the problem where nondimensionalization is applied, while almost all systems require the first two steps to be performed.
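The idea of measuring time relative to an intrinsic scale can be sketched with a simple first-order decay system. This is a minimal, hypothetical example: substituting the dimensionless time τ = λt and the dimensionless amount n = N/N₀ turns dN/dt = −λN into the parameter-free equation dn/dτ = −n, so systems with very different rate constants (the values below are assumed) collapse onto the same curve n = exp(−τ).

```python
import math

def decay(N0, lam, t):
    """Dimensional solution N(t) = N0 * exp(-lam * t) of dN/dt = -lam*N."""
    return N0 * math.exp(-lam * t)

# Two systems with very different scales (illustrative values)...
tau = 1.5                          # the same dimensionless time for both
for N0, lam in [(5.0, 0.3), (120.0, 7.0)]:
    t = tau / lam                  # corresponding dimensional time
    n = decay(N0, lam, t) / N0     # dimensionless amount n = N / N0
    print(n)                       # both print exp(-1.5), about 0.2231
```

Because both systems print the same dimensionless value, a single solution of the nondimensionalized equation describes every member of the family, which is exactly why the technique parameterizes problems so effectively.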
A chemical reaction is a process that leads to the transformation of one set of chemical substances to another. Nuclear chemistry is a sub-discipline of chemistry that involves the chemical reactions of unstable and radioactive elements. The substances initially involved in a reaction are called reactants or reagents. Chemical reactions are characterized by a chemical change, and they yield one or more products. Reactions often consist of a sequence of individual sub-steps, the elementary reactions. Chemical reactions are described with chemical equations, which symbolically present the starting materials, end products, and sometimes intermediate products and reaction conditions. Chemical reactions happen at a characteristic reaction rate at a given temperature; reaction rates increase with increasing temperature because there is more thermal energy available to reach the activation energy necessary for breaking bonds between atoms. Reactions may proceed in the forward or reverse direction until they go to completion or reach equilibrium. Reactions that proceed in the forward direction to approach equilibrium are often described as spontaneous, requiring no input of free energy to go forward.
Non-spontaneous reactions require input of energy to go forward. Different chemical reactions are used in combinations during chemical synthesis in order to obtain a desired product. In biochemistry, a consecutive series of chemical reactions form metabolic pathways; these reactions are catalyzed by protein enzymes. Chemical reactions such as combustion in fire and the reduction of ores to metals have been known since antiquity. In the Middle Ages, chemical transformations were studied by alchemists. They attempted, in particular, to convert lead into gold, for which purpose they used reactions of lead and lead-copper alloys with sulfur. The production of mineral acids involved heating of sulfate and nitrate minerals such as copper sulfate, alum and saltpeter. In the 17th century, Johann Rudolph Glauber produced hydrochloric acid and sodium sulfate by reacting sulfuric acid and sodium chloride. Further optimization of sulfuric acid technology resulted in the contact process in the 1880s, and the Haber process was developed in 1909–1910 for ammonia synthesis. From the 16th century, researchers including Jan Baptist van Helmont and Robert Boyle tried to establish theories of experimentally observed chemical transformations. The phlogiston theory was proposed in 1667 by Johann Joachim Becher.
It postulated the existence of an element called phlogiston, which was contained within combustible bodies. This theory was proved false in 1785 by Antoine Lavoisier, who found the correct explanation of combustion as a reaction with oxygen from the air.
Starlight is the light emitted by stars. Sunlight is the term used for the Sun's starlight observed during daytime. During nighttime, albedo describes solar reflections from other Solar System objects, including moonlight. Observation and measurement of starlight through telescopes is the basis for many fields of astronomy, including photometry and stellar spectroscopy. Hipparchus did not have a telescope or any instrument that could measure apparent brightness accurately, so he sorted the stars into six brightness categories, which he called magnitudes. He referred to the brightest stars in his catalog as first-magnitude stars. Starlight is a notable part of personal experience and human culture, impacting a diverse range of pursuits including poetry and military strategy. Early night-vision devices amplified starlight; in contrast to previously developed active infrared systems such as the sniperscope, they were passive devices. The average color of starlight in the observable universe is a shade of yellowish-white that has been given the name Cosmic Latte.
Starlight spectroscopy, the examination of stellar spectra, was pioneered by Joseph Fraunhofer in 1814. Starlight can be understood to be composed of three main spectral types: continuous spectrum, emission spectrum, and absorption spectrum. One of the oldest stars yet identified was found in 2014; the starlight shining on Earth includes light from this star. In the field of photography, there is a specialty of night-time photography, especially when subjects are lit primarily by starlight. Directly taking images of the night sky is a part of astrophotography. Starlight astrophotography can be used for the pursuit of science and/or leisure, and starlight photography can be important for observing nocturnal animals. In many cases starlight photography may overlap with a need to understand the impact of moonlight. See also: List of brightest stars, Purkinje effect.
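Hipparchus' six brightness categories were later formalized into the modern magnitude scale, in which a difference of 5 magnitudes corresponds to a brightness ratio of exactly 100, giving m1 − m2 = −2.5·log10(F1/F2). A minimal sketch of this relation (the flux values are arbitrary illustrative numbers):

```python
import math

def magnitude_difference(flux1, flux2):
    """Apparent-magnitude difference m1 - m2 for two observed fluxes."""
    return -2.5 * math.log10(flux1 / flux2)

# A star 100x brighter than another is 5 magnitudes brighter
# (i.e. its magnitude number is smaller by 5).
print(magnitude_difference(100.0, 1.0))  # -5.0
```

The negative sign encodes the historical convention that brighter stars carry smaller magnitude numbers, matching Hipparchus' choice of calling the brightest stars first-magnitude.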
Pesticides are substances that are meant to control pests or weeds. The most common of these are herbicides, which account for approximately 80% of all pesticide use. Most pesticides are intended to serve as plant protection products, which in general protect plants from weeds, fungi, or insects. In general, a pesticide is a chemical or biological agent that deters, kills, or otherwise discourages pests. Target pests can include insects, plant pathogens, mollusks, mammals, and nematodes. Although pesticides have benefits, some have drawbacks, such as potential toxicity to humans and other species. According to the Stockholm Convention on Persistent Organic Pollutants, 9 of the 12 most dangerous and persistent organic chemicals are pesticides. The term also includes substances intended for use as a plant growth regulator, desiccant, or agent for thinning fruit or preventing the premature fall of fruit, as well as substances applied to crops either before or after harvest to protect the commodity from deterioration during storage. Pesticides can be classified by target organism, chemical structure, and physical state. Biopesticides include microbial pesticides and biochemical pesticides.
Plant-derived pesticides, or botanicals, have been developing quickly. These include the pyrethroids, nicotinoids, and a fourth group that includes strychnine and scilliroside. Many pesticides can be grouped into chemical families; prominent insecticide families include organochlorines, organophosphates, and carbamates. Organochlorine hydrocarbons can be separated into dichlorodiphenylethanes, cyclodiene compounds, and other related compounds. They operate by disrupting the sodium/potassium balance of the nerve fiber, forcing the nerve to transmit continuously. Their toxicities vary greatly, but they have been phased out because of their persistence. Organophosphates and carbamates largely replaced organochlorines; both operate by inhibiting the enzyme acetylcholinesterase, allowing acetylcholine to transfer nerve impulses indefinitely and causing a variety of symptoms such as weakness or paralysis. Organophosphates are quite toxic to vertebrates, and have in some cases been replaced by less toxic carbamates. Thiocarbamate and dithiocarbamates are subclasses of carbamates. Prominent families of herbicides include phenoxy and benzoic acid herbicides, triazines, and chloroacetanilides.
Phenoxy compounds tend to selectively kill broad-leaf weeds rather than grasses. The phenoxy and benzoic acid herbicides function similarly to plant growth hormones: they cause cells to grow without normal cell division, crushing the plant's nutrient transport system. Many commonly used pesticides are not included in these families, including glyphosate. Pesticides can also be classified based upon their biological mechanism of function or their application method. Most pesticides work by poisoning pests. A systemic pesticide moves inside a plant following absorption by the plant. With insecticides and most fungicides, this movement is usually upward and outward; increased efficiency may be a result.
Plants are mainly multicellular, predominantly photosynthetic eukaryotes of the kingdom Plantae. The term is generally limited to the green plants, which form an unranked clade Viridiplantae. This includes the flowering plants, conifers and other gymnosperms, clubmosses, liverworts and the green algae. Green plants have cell walls containing cellulose and obtain most of their energy from sunlight via photosynthesis by primary chloroplasts; their chloroplasts contain chlorophylls a and b, which gives them their green color. Some plants are parasitic and have lost the ability to produce normal amounts of chlorophyll or to photosynthesize. Plants are characterized by sexual reproduction and alternation of generations, although asexual reproduction is also common. There are about 300–315 thousand species of plants, of which the great majority are seed plants. Green plants provide most of the world's molecular oxygen and are the basis of most of Earth's ecologies, especially on land. Plants that produce grains and vegetables form humankind's basic foodstuffs. Plants play many roles in culture.
They are used as ornaments and, until recently and in great variety, they have served as the source of most medicines. The scientific study of plants is known as botany, a branch of biology. Plants are one of the two groups into which all living things were traditionally divided; the other is animals. The division goes back at least as far as Aristotle, who distinguished between plants, which generally do not move, and animals, which often are mobile to catch their food. Much later, when Linnaeus created the basis of the modern system of scientific classification, these two groups became kingdoms. Since then, it has become clear that the plant kingdom as originally defined included several unrelated groups; nevertheless, these organisms are still often considered plants, particularly in popular contexts. When the name Plantae or plant is applied to a specific group of organisms or taxon, it usually refers to one of several concepts. The evolutionary history of plants is not yet fully settled. Those groups which have been called plants are in bold; the way in which the groups of green algae are combined and named varies considerably between authors.
Algae comprise several different groups of organisms which produce energy through photosynthesis. Most conspicuous among the algae are the seaweeds, multicellular algae that may roughly resemble land plants but are classified among the brown and green algae. Each of these groups also includes various microscopic and single-celled organisms.
A computer program is a collection of instructions that performs a specific task when executed by a computer. A computer requires programs to function, and typically executes the program's instructions in a central processing unit. A computer program is written by a computer programmer in a programming language. From the program in its human-readable form of source code, a compiler can derive machine code—a form consisting of instructions that the computer can directly execute. Alternatively, a program may be executed with the aid of an interpreter. A part of a program that performs a well-defined task is known as an algorithm. A collection of programs and related data is referred to as software. Computer programs may be categorized along functional lines, such as application software or system software. The earliest programmable machines preceded the invention of the digital computer. In 1801, Joseph-Marie Jacquard devised a loom that would weave a pattern by following a series of perforated cards. Patterns could be woven and repeated by arranging the cards. In 1837, Charles Babbage was inspired by Jacquard's loom to attempt to build the Analytical Engine.
The names of the components of the device were borrowed from the textile industry, in which yarn was brought from the store to be milled. The device would have had a store—memory to hold 1,000 numbers of 40 decimal digits each. Numbers from the store would have been transferred to the mill for processing. It was programmed using two sets of perforated cards—one to direct the operation and the other for the input variables. However, after more than 17,000 pounds of the British government's money, the thousands of cogged wheels and gears never fully worked together. During a nine-month period in 1842–43, Ada Lovelace translated the memoir of Italian mathematician Luigi Menabrea, which covered the Analytical Engine. The translation contained Note G, which completely detailed a method for calculating Bernoulli numbers using the Analytical Engine. This note is recognized by some historians as the world's first written computer program. In 1936, Alan Turing introduced the Universal Turing machine—a theoretical device that can model every computation that can be performed on a Turing complete computing machine. It is a finite-state machine that has an infinitely long read/write tape.
The machine can move the tape back and forth, changing its contents as it performs an algorithm.
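The finite-state machine plus read/write tape described above can be sketched in a few lines. This is a minimal illustration, not Turing's formulation: the machine below is a hypothetical example whose rules flip every bit of its input and then halt.

```python
# A minimal Turing-machine sketch: a rule table maps (state, symbol) to
# (symbol to write, head move, next state). "_" is the blank symbol.

def run_turing_machine(rules, tape_input, state="start"):
    """Run the machine until it reaches the 'halt' state; return the tape."""
    tape = dict(enumerate(tape_input))  # sparse tape, blanks by default
    head = 0
    while state != "halt":
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write                      # write the new symbol
        head += 1 if move == "R" else -1        # move the head
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# Rules for a machine that flips each bit, moving right until the blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(rules, "1011"))  # prints "0100"
```

The dictionary tape stands in for the infinitely long tape: any cell not yet written reads as blank, so the machine never runs off the end, mirroring the unbounded tape of the theoretical device.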
Caesium or cesium is a chemical element with symbol Cs and atomic number 55. It is a soft, silvery-gold alkali metal with a melting point of 28.5 °C. Caesium is an alkali metal and has physical and chemical properties similar to those of rubidium and potassium. The metal is extremely reactive and pyrophoric, reacting with water even at −116 °C. Caesium is one of the most reactive elements of all, even more reactive than fluorine, the most reactive nonmetal. It is the least electronegative element, with a value of 0.79 on the Pauling scale, and it has only one stable isotope, caesium-133. Caesium is mined mostly from pollucite, while the radioisotopes, especially caesium-137, are extracted from waste produced by nuclear reactors. The German chemist Robert Bunsen and physicist Gustav Kirchhoff discovered caesium in 1860 by the newly developed method of flame spectroscopy. The first small-scale applications for caesium were as a getter in vacuum tubes; since then, caesium has been widely used in highly accurate atomic clocks. The radioactive isotope caesium-137 has a half-life of about 30 years and is used in medical applications and industrial gauges.
Although the element is only mildly toxic, the metal is a hazardous material. It is a ductile, pale metal, which darkens in the presence of trace amounts of oxygen. When stored in the presence of oil, it loses its metallic lustre and takes on a duller, grey appearance. It has a melting point of 28.5 °C, making it one of the few elemental metals that are liquid near room temperature. Mercury is the only elemental metal with a known melting point lower than caesium. In addition, the metal has a rather low boiling point, 641 °C. Its compounds burn with a blue or violet colour. Caesium forms alloys with the other alkali metals and with mercury. At temperatures below 650 °C, it does not alloy with cobalt, molybdenum, platinum, or tantalum, but it forms well-defined intermetallic compounds with antimony, gallium and thorium, which are photosensitive. It mixes with all the other alkali metals; the alloy with a molar distribution of 41% caesium, 47% potassium, and 12% sodium has an exceptionally low melting point. A few amalgams have been studied: CsHg2 is black with a metallic lustre, while CsHg is golden-coloured.