Biology is the natural science that studies life and living organisms, including their physical structure, chemical processes, molecular interactions, physiological mechanisms and evolution. Despite the complexity of the science, there are certain unifying concepts that consolidate it into a single, coherent field. Biology recognizes the cell as the basic unit of life, genes as the basic unit of heredity, and evolution as the engine that propels the creation and extinction of species. Living organisms are open systems that survive by transforming energy and decreasing their local entropy to maintain a stable and vital condition defined as homeostasis. Sub-disciplines of biology are defined by the research methods employed and the kind of system studied: theoretical biology uses mathematical methods to formulate quantitative models, while experimental biology performs empirical experiments to test the validity of proposed theories and to understand the mechanisms underlying life, including how it appeared and evolved from non-living matter about 4 billion years ago through a gradual increase in complexity.
The term biology is derived from the Greek word βίος, bios, "life" and the suffix -λογία, -logia, "study of." The Latin-language form of the term first appeared in 1736 when Swedish scientist Carl Linnaeus used biologi in his Bibliotheca botanica. It was used again in 1766 in a work entitled Philosophiae naturalis sive physicae: tomus III, continens geologian, phytologian generalis, by Michael Christoph Hanov, a disciple of Christian Wolff; the first German use was in a 1771 translation of Linnaeus' work. In 1797, Theodor Georg August Roose used the term in the preface of a book, Grundzüge der Lehre von der Lebenskraft. Karl Friedrich Burdach used the term in 1800 in a more restricted sense of the study of human beings from a morphological and psychological perspective. The term came into its modern usage with the six-volume treatise Biologie, oder Philosophie der lebenden Natur by Gottfried Reinhold Treviranus, who announced: "The objects of our research will be the different forms and manifestations of life, the conditions and laws under which these phenomena occur, and the causes through which they have been effected.
The science that concerns itself with these objects we will indicate by the name biology or the doctrine of life." Although modern biology is a comparatively recent development, sciences related to and included within it have been studied since ancient times. Natural philosophy was studied as early as the ancient civilizations of Mesopotamia, the Indian subcontinent and China. However, the origins of modern biology and its approach to the study of nature are most often traced back to ancient Greece. While the formal study of medicine dates back to Hippocrates, it was Aristotle who contributed most extensively to the development of biology. Especially important are his History of Animals and other works in which he showed naturalist leanings, and later more empirical works that focused on biological causation and the diversity of life. Aristotle's successor at the Lyceum, Theophrastus, wrote a series of books on botany that survived as the most important contribution of antiquity to the plant sciences, even into the Middle Ages. Scholars of the medieval Islamic world who wrote on biology included al-Jahiz, Al-Dīnawarī, who wrote on botany, and Rhazes, who wrote on anatomy and physiology.
Medicine was well studied by Islamic scholars working in Greek philosopher traditions, while natural history drew on Aristotelian thought in upholding a fixed hierarchy of life. Biology began to develop and grow with Anton van Leeuwenhoek's dramatic improvement of the microscope; it was then that scholars discovered spermatozoa, bacteria and the diversity of microscopic life. Investigations by Jan Swammerdam led to new interest in entomology and helped to develop the basic techniques of microscopic dissection and staining. Advances in microscopy had a profound impact on biological thinking. In the early 19th century, a number of biologists pointed to the central importance of the cell. In 1838, Schleiden and Schwann began promoting the now universal ideas that the basic unit of organisms is the cell and that individual cells have all the characteristics of life, although they opposed the idea that all cells come from the division of other cells. Thanks to the work of Robert Remak and Rudolf Virchow, however, by the 1860s most biologists accepted all three tenets of what came to be known as cell theory.
Meanwhile, taxonomy and classification became the focus of natural historians. Carl Linnaeus published a basic taxonomy for the natural world in 1735, and in the 1750s introduced scientific names for all his species. Georges-Louis Leclerc, Comte de Buffon, treated species as artificial categories and living forms as malleable, even suggesting the possibility of common descent. Although he was opposed to evolution, Buffon is a key figure in the history of evolutionary thought. Serious evolutionary thinking originated with the works of Jean-Baptiste Lamarck, the first to present a coherent theory of evolution. He posited that evolution was the result of environmental stress on properties of animals, meaning that the more often and rigorously an organ was used, the more complex and efficient it would become, thus adapting the animal to its environment. Lamarck believed that these acquired traits could be passed on to the animal's offspring, who would then further develop and perfect them.
Chemistry is the scientific discipline involved with elements and compounds composed of atoms, molecules and ions: their composition, properties and the changes they undergo during a reaction with other substances. In the scope of its subject, chemistry occupies an intermediate position between physics and biology. It is sometimes called the central science because it provides a foundation for understanding both basic and applied scientific disciplines at a fundamental level. For example, chemistry explains aspects of plant chemistry, the formation of igneous rocks, how atmospheric ozone is formed and how environmental pollutants are degraded, the properties of the soil on the moon, how medications work, and how to collect DNA evidence at a crime scene. Chemistry addresses topics such as how atoms and molecules interact via chemical bonds to form new chemical compounds. There are four types of chemical bonds: covalent bonds, in which atoms share one or more electrons; ionic bonds, in which one atom donates one or more electrons to another to produce ions; hydrogen bonds; and van der Waals force bonds. The word chemistry comes from alchemy, which referred to an earlier set of practices that encompassed elements of chemistry, philosophy, astronomy and medicine.
It is seen as linked to the quest to turn lead or other common starting materials into gold, though in ancient times the study encompassed many of the questions of modern chemistry, being defined by the early 4th-century Greek-Egyptian alchemist Zosimos as the study of the composition of waters, growth, disembodying, drawing the spirits from bodies and bonding the spirits within bodies. An alchemist was called a 'chemist' in popular speech, and the suffix "-ry" was added to this to describe the art of the chemist as "chemistry". The modern word alchemy in turn is derived from the Arabic word al-kīmīā. In origin, the term is borrowed from the Greek χημία or χημεία; this may have Egyptian origins, since χημία is in turn derived from the word Kemet, the ancient name of Egypt in the Egyptian language. Alternatively, al-kīmīā may derive from χημεία, meaning "cast together". The current model of atomic structure is the quantum mechanical model. Traditional chemistry starts with the study of elementary particles, atoms, molecules, metals and other aggregates of matter.
This matter can be studied in isolation or in combination. The interactions and transformations that are studied in chemistry are the result of interactions between atoms, leading to rearrangements of the chemical bonds which hold atoms together; such behaviors are studied in a chemistry laboratory. The chemistry laboratory stereotypically uses various forms of laboratory glassware; however, glassware is not central to chemistry, and a great deal of experimental chemistry is done without it. A chemical reaction is a transformation of some substances into one or more different substances; the basis of such a chemical transformation is the rearrangement of electrons in the chemical bonds between atoms. It can be symbolically depicted through a chemical equation, which involves atoms as subjects. The number of atoms on the left and the right side of the equation for a chemical transformation is equal. The type of chemical reactions a substance may undergo and the energy changes that may accompany them are constrained by certain basic rules, known as chemical laws.
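A worked example makes this atom balance concrete. The combustion of methane is a standard textbook reaction (not drawn from the text above):

```latex
% Balanced chemical equation for the combustion of methane.
% Each side carries 1 C, 4 H and 4 O atoms, so matter is conserved.
\[
\mathrm{CH_4} + 2\,\mathrm{O_2} \longrightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O}
\]
```

Adjusting the stoichiometric coefficients (here the 2s) until each element's count matches on both sides is exactly the constraint described above.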
Energy and entropy considerations are invariably important in all chemical studies. Chemical substances are classified in terms of their structure and phase, as well as their chemical composition; they can be analyzed using the tools of chemical analysis, e.g. spectroscopy and chromatography. Scientists engaged in chemical research are known as chemists. Most chemists specialize in one or more sub-disciplines. Several concepts are essential for the study of chemistry. The particles that make up matter have rest mass as well, though not all particles have rest mass; the photon, for example, does not. Matter can be a pure substance or a mixture of substances. The atom is the basic unit of chemistry. It consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. The nucleus is made up of positively charged protons and uncharged neutrons, while the electron cloud consists of negatively charged electrons which orbit the nucleus. In a neutral atom, the negatively charged electrons balance out the positive charge of the protons.
The nucleus is dense. The atom is the smallest entity that can be envisaged to retain the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state, coordination number and preferred types of bonds to form. A chemical element is a pure substance composed of a single type of atom, characterized by its particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol Z. The mass number is the sum of the number of protons and neutrons in a nucleus. Although all the nuclei of all atoms belonging to one element will have the same atomic number, they may not necessarily have the same mass number; atoms of an element which have different mass numbers are known as isotopes.
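The relation between atomic number Z, neutron number N and mass number A can be written out explicitly; carbon is a standard illustration of isotopes:

```latex
% Mass number A = atomic number Z + neutron number N.
\[ A = Z + N \]
% Carbon has Z = 6; its isotopes differ only in neutron count:
\[ {}^{12}\mathrm{C}:\ A = 6 + 6 = 12, \qquad {}^{14}\mathrm{C}:\ A = 6 + 8 = 14 \]
```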
Home computers were a class of microcomputers that entered the market in 1977, beginning with what Byte Magazine called the "trinity of 1977", and became common during the 1980s. They were marketed to consumers as affordable and accessible computers that, for the first time, were intended for the use of a single nontechnical user. These computers were a distinct market segment that cost much less than the business, scientific or engineering-oriented computers of the time, such as the IBM PC, and were less powerful in terms of memory and expandability. However, a home computer had better graphics and sound than contemporary business computers. Their most common use was playing video games, but they were also regularly used for word processing, doing homework and programming. Home computers were usually sold pre-assembled rather than as electronic kits, though there were commercial kits like the Sinclair ZX80 which were both home and home-built computers, since the purchaser could assemble the unit from a kit. Advertisements in the popular press for early home computers were rife with possibilities for their practical use in the home, from cataloging recipes to personal finance to home automation, but these were seldom realized in practice.
For example, using a typical 1980s home computer as a home automation appliance would require the computer to be kept powered on at all times and dedicated to this task. Personal finance and database use required tedious data entry. By contrast, advertisements in the specialty computer press simply listed specifications. If no packaged software was available for a particular application, the home computer user could program one, provided they had invested the requisite hours to learn computer programming, as well as the idiosyncrasies of their system. Since most systems shipped with the BASIC programming language included on the system ROM, it was easy for users to get started creating their own simple applications. Many users found programming to be a fun and rewarding experience, and an excellent introduction to the world of digital technology. The line between 'business' and 'home' computer market segments blurred or vanished once IBM PC compatibles became used in the home, since now both categories of computers used the same processor architectures, operating systems and applications.
The only difference may be the sales outlet through which they are purchased. Another change from the home computer era is that the once-common endeavour of writing one's own software programs has vanished from home computer use. As early as 1965, some experimental projects, such as Jim Sutherland's ECHO IV, explored the possible utility of a computer in the home. In 1969, the Honeywell Kitchen Computer was marketed as a luxury gift item and would have inaugurated the era of home computing, but none were sold. Computers became affordable for the general public in the 1970s due to the mass production of the microprocessor starting in 1971. Early microcomputers such as the Altair 8800 had front-mounted switches and diagnostic lights to control and indicate internal system status, and were sold in kit form to hobbyists. These kits would contain an empty printed circuit board which the buyer would fill with the integrated circuits, other individual electronic components and connectors, and then hand-solder all the connections.
While two early home computers could be bought either in kit form or assembled, most home computers were only sold pre-assembled. They were enclosed in plastic or metal cases similar in appearance to typewriter or hi-fi equipment enclosures, which were more familiar and attractive to consumers than the industrial metal card-cage enclosures used by the Altair and similar computers. The keyboard, a feature lacking on the Altair, was built into the same case as the motherboard. Ports for plug-in peripheral devices such as a video display, cassette tape recorders and disk drives were either built in or available on expansion cards. Although the Apple II series had internal expansion slots, most other home computer models' expansion arrangements were through externally accessible 'expansion ports' that also served as a place to plug in cartridge-based games. The manufacturer would sell peripheral devices designed to be compatible with their computers as extra-cost accessories. Peripherals and software were generally not interchangeable between different brands of home computer, or even between successive models of the same brand.
To save the cost of a dedicated monitor, the home computer would connect through an RF modulator to the family TV set, which served as both video display and sound system. By 1982, an estimated 621,000 home computers were in American households, at an average sales price of US$530. After the success of the Radio Shack TRS-80, the Commodore PET and the Apple II in 1977, every manufacturer of consumer electronics rushed to introduce a home computer. Large numbers of new machines of all types began to appear during the early 1980s. Mattel, Texas Instruments and Timex, none of which had any previous connection to the computer industry, all had short-lived home computer lines in the early 1980s. Some home computers were more successful: the BBC Micro, Sinclair ZX Spectrum, Atari 800XL and Commodore 64 sold many units over several years and attracted third-party software development. Almost universally, home computers had a BASIC interpreter combined with a line editor in permanent read-only memory which one could use to type in BASIC programs and execute them
Solid-state physics is the study of rigid matter, or solids, through methods such as quantum mechanics, crystallography and metallurgy. It is the largest branch of condensed matter physics. Solid-state physics studies how the large-scale properties of solid materials result from their atomic-scale properties. Thus, solid-state physics forms a theoretical basis of materials science, and it has direct applications, for example in the technology of transistors and semiconductors. Solid materials are formed from densely packed atoms, which interact intensely; these interactions produce the mechanical, electrical and optical properties of solids. Depending on the material involved and the conditions in which it was formed, the atoms may be arranged in a regular, geometric pattern or irregularly. The bulk of solid-state physics, as a general theory, is focused on crystals. This is because the periodicity of atoms in a crystal, its defining characteristic, facilitates mathematical modeling. Crystalline materials have electrical, optical, or mechanical properties that can be exploited for engineering purposes.
The forces between the atoms in a crystal can take a variety of forms. For example, in a crystal of sodium chloride, the crystal is made up of ionic sodium and chlorine, held together with ionic bonds. In others, the atoms share electrons and form covalent bonds. In metals, electrons are shared amongst the whole crystal in metallic bonding. Finally, the noble gases do not undergo any of these types of bonding. In solid form, the noble gases are held together with van der Waals forces resulting from the polarisation of the electronic charge cloud on each atom. The differences between the types of solid result from the differences between their bonding. The physical properties of solids have been common subjects of scientific inquiry for centuries, but a separate field going by the name of solid-state physics did not emerge until the 1940s, in particular with the establishment of the Division of Solid State Physics (DSSP) within the American Physical Society. The DSSP catered to industrial physicists, and solid-state physics became associated with the technological applications made possible by research on solids.
By the early 1960s, the DSSP was the largest division of the American Physical Society. Large communities of solid state physicists also emerged in Europe after World War II, in particular in England and the Soviet Union. In the United States and Europe, solid state became a prominent field through its investigations into semiconductors, superconductivity, nuclear magnetic resonance and diverse other phenomena. During the early Cold War, research in solid state physics was often not restricted to solids, which led some physicists in the 1970s and 1980s to found the field of condensed matter physics, which organized around common techniques used to investigate solids, liquids and other complex matter. Today, solid-state physics is broadly considered to be the subfield of condensed matter physics that focuses on the properties of solids with regular crystal lattices. Many properties of materials are affected by their crystal structure; this structure can be investigated using a range of crystallographic techniques, including X-ray crystallography, neutron diffraction and electron diffraction.
The sizes of the individual crystals in a crystalline solid material vary depending on the material involved and the conditions when it was formed. Most crystalline materials encountered in everyday life are polycrystalline, with the individual crystals being microscopic in scale, but macroscopic single crystals can be produced either naturally or artificially. Real crystals feature defects or irregularities in the ideal arrangements, and it is these defects that critically determine many of the electrical and mechanical properties of real materials. Properties of materials such as electrical conduction and heat capacity are investigated by solid state physics. An early model of electrical conduction was the Drude model, which applied kinetic theory to the electrons in a solid. By assuming that the material contains immobile positive ions and an "electron gas" of classical, non-interacting electrons, the Drude model was able to explain electrical and thermal conductivity and the Hall effect in metals, although it overestimated the electronic heat capacity.
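To make the Drude picture concrete, the sketch below evaluates its DC conductivity formula, σ = ne²τ/m. This is a minimal illustration rather than anything from the text above; the carrier density and relaxation time are rough textbook-style values for copper, assumed only for the sake of the example.

```python
# Minimal sketch of the Drude model's DC conductivity, sigma = n e^2 tau / m.
# The density n and relaxation time tau below are illustrative textbook-style
# values for copper, assumed only for this example.

E_CHARGE = 1.602e-19  # elementary charge, C
E_MASS = 9.109e-31    # electron mass, kg

def drude_conductivity(n: float, tau: float) -> float:
    """DC conductivity of a classical, non-interacting electron gas.

    n   -- conduction-electron density in m^-3
    tau -- mean time between collisions with the immobile ions, in s
    """
    return n * E_CHARGE**2 * tau / E_MASS

n_cu = 8.5e28     # m^-3, roughly one conduction electron per copper atom
tau_cu = 2.5e-14  # s, rough room-temperature relaxation time

print(f"sigma = {drude_conductivity(n_cu, tau_cu):.2e} S/m")
# -> about 6e7 S/m, the right order of magnitude for copper
```

The result lands near copper's measured conductivity, which is why the model was counted a success for transport even as it failed badly on the electronic heat capacity.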
Arnold Sommerfeld combined the classical Drude model with quantum mechanics in the free electron model. Here, the electrons are modelled as a Fermi gas, a gas of particles which obey the quantum mechanical Fermi–Dirac statistics. The free electron model gave improved predictions for the heat capacity of metals; however, it was unable to explain the existence of insulators. The nearly free electron model is a modification of the free electron model which includes a weak periodic perturbation meant to model the interaction between the conduction electrons and the ions in a crystalline solid. By introducing the idea of electronic bands, the theory explains the existence of conductors and insulators. The nearly free electron model rewrites the Schrödinger equation for the case of a periodic potential. The solutions in this case are known as Bloch states. Since Bloch's theorem applies only to periodic potentials, and since unceasing random movements of atoms in a crystal disrupt periodicity, this use of Bloch's theorem is only an approximation, but it has proven to be a tremendously valuable approximation, without which most solid-state physics analysis would be intractable.
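Bloch states have a simple closed form. In standard notation (conventional, not given in the text above), Bloch's theorem says that eigenfunctions in a periodic potential are plane waves modulated by a lattice-periodic function:

```latex
% Bloch's theorem for a potential with lattice translation vectors R:
\[
\psi_{n\mathbf{k}}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}\, u_{n\mathbf{k}}(\mathbf{r}),
\qquad
u_{n\mathbf{k}}(\mathbf{r} + \mathbf{R}) = u_{n\mathbf{k}}(\mathbf{r})
\]
```

Here n labels the energy band and k the crystal momentum; the resulting band structure is what separates conductors from insulators in the nearly free electron picture.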
Deviations from periodicity are treated by quantum mechanical perturbation theory.
Accelerator physics is a branch of applied physics concerned with designing and operating particle accelerators. As such, it can be described as the study of motion and observation of relativistic charged particle beams and their interaction with accelerator structures by electromagnetic fields. It is related to other fields: microwave engineering; optics, with an emphasis on geometrical optics and laser physics; and computer technology, with an emphasis on digital signal processing. The experiments conducted with particle accelerators are not regarded as part of accelerator physics, but belong to, e.g. particle physics, nuclear physics, condensed matter physics or materials physics. The types of experiments done at a particular accelerator facility are determined by characteristics of the generated particle beam such as average energy, particle type and dimensions. While it is possible to accelerate charged particles using electrostatic fields, as in a Cockcroft-Walton voltage multiplier, this method has limits given by electrical breakdown at high voltages.
Furthermore, since electrostatic fields are conservative, the maximum voltage limits the kinetic energy that can be imparted to the particles. To circumvent this problem, linear particle accelerators operate using time-varying fields. Because these fields are controlled using hollow macroscopic structures through which the particles pass, the frequency of such acceleration fields is located in the radio frequency region of the electromagnetic spectrum. The space around a particle beam is evacuated to prevent scattering with gas atoms, requiring it to be enclosed in a vacuum chamber. Due to the strong electromagnetic fields that follow the beam, it is possible for it to interact with any electrical impedance in the walls of the beam pipe; this may be in the form of an inductive/capacitive impedance. These impedances will induce wakefields that can interact with the particles. Since this interaction may have negative effects, it is studied to determine its magnitude and to determine any actions that may be taken to mitigate it.
Due to the high velocity of the particles, and the resulting Lorentz force for magnetic fields, adjustments to the beam direction are mainly controlled by magnetostatic fields that deflect particles. In most accelerator concepts, these are applied by dedicated electromagnets with different properties and functions. An important step in the development of these types of accelerators was the understanding of strong focusing. Dipole magnets are used to guide the beam through the structure, while quadrupole magnets are used for beam focusing, and sextupole magnets are used for correction of dispersion effects. A particle on the exact design trajectory of the accelerator only experiences dipole field components, while particles with transverse position deviation x are re-focused to the design orbit. For preliminary calculations, neglecting all field components higher than quadrupolar, an inhomogeneous Hill differential equation

\[
\frac{\mathrm{d}^2}{\mathrm{d}s^2}\,x(s) + k(s)\,x(s) = \frac{1}{\rho}\,\frac{\Delta p}{p}
\]

can be used as an approximation, with: a non-constant focusing force k(s), including strong focusing and weak focusing effects; the relative deviation from the design beam momentum, Δp/p; the trajectory radius of curvature, ρ; and the design path length, s; thus identifying the system as a parametric oscillator.
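For piecewise-constant k(s), the homogeneous part of this equation can be solved segment by segment and the segment solutions chained together as matrices, which is the ray transfer matrix analysis described next. The sketch below is illustrative only: the thin-lens focal length, drift length and function names are assumptions made for the example, not values from the text.

```python
import numpy as np

# Sketch: for piecewise-constant focusing strength k, solutions of the
# homogeneous Hill equation x'' + k(s) x = 0 are chained together as 2x2
# ray transfer matrices acting on the vector (x, x').

def drift(L: float) -> np.ndarray:
    """Field-free drift of length L (k = 0)."""
    return np.array([[1.0, L],
                     [0.0, 1.0]])

def thin_quad(f: float) -> np.ndarray:
    """Thin-lens quadrupole; f > 0 focuses in the x plane."""
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

f, L = 2.0, 1.0  # metres, illustrative values only

# One FODO cell: focusing quad -> drift -> defocusing quad -> drift.
# Matrices apply right-to-left, so the first element is rightmost.
cell = drift(L) @ thin_quad(-f) @ drift(L) @ thin_quad(f)

x0 = np.array([1e-3, 0.0])  # 1 mm transverse offset, zero slope
print("after one cell:", cell @ x0)

# A periodic lattice gives stable, bounded oscillations iff |trace| <= 2,
# the textbook stability criterion for a parametric oscillator.
print("stable:", abs(np.trace(cell)) <= 2.0)
```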
Beam parameters for the accelerator can be calculated using ray transfer matrix analysis. The general equations of motion originate from relativistic Hamiltonian mechanics, in almost all cases using the paraxial approximation. In the cases of nonlinear magnetic fields, without the paraxial approximation, a Lie transform may be used to construct an integrator with a high degree of accuracy. There are many different software packages available for modeling the different aspects of accelerator physics: one must model the elements that create the electric and magnetic fields, and one must model the charged particle evolution within those fields. A popular code for beam dynamics, developed at CERN, is MAD, or Methodical Accelerator Design. A vital component of any accelerator are the diagnostic devices that allow various properties of the particle bunches to be measured. A typical machine may use many different types of measurement device in order to measure different properties; these include beam position monitors to measure the position of the beam.
Richard E. Taylor
Richard Edward Taylor was a Canadian physicist and Stanford University professor. He shared the 1990 Nobel Prize in Physics with Jerome Friedman and Henry Kendall "for their pioneering investigations concerning deep inelastic scattering of electrons on protons and bound neutrons, which have been of essential importance for the development of the quark model in particle physics." Taylor was born in Alberta. He studied for his BSc and MSc degrees at the University of Alberta in Canada. Newly married, he applied to work for a PhD degree at Stanford University, where he joined the High Energy Physics Laboratory. His PhD thesis was on an experiment using polarised gamma rays to study pion production. After three years at the École Normale Supérieure in Paris and a year at the Lawrence Berkeley Laboratory in California, Taylor returned to Stanford, where construction of the Stanford Linear Accelerator Center (SLAC) was beginning. In collaboration with researchers from the California Institute of Technology and the Massachusetts Institute of Technology, Taylor worked on the design and construction of the equipment and was involved in many of the experiments.
In 1971, Taylor was awarded a Guggenheim fellowship that allowed him to spend a sabbatical year at CERN. The experiments run at SLAC in the late 1960s and early 1970s involved scattering high-energy beams of electrons from protons, deuterons and heavier nuclei. At lower energies, it had been found that the electrons would only be scattered through low angles, consistent with the idea that the nucleons had no internal structure. However, the SLAC-MIT experiments showed that higher-energy electrons could be scattered through much higher angles, with the loss of some energy. These deep inelastic scattering results provided the first experimental evidence that the protons and neutrons were made up of point-like particles, later identified with the up and down quarks that had been proposed on theoretical grounds. The experiments also provided the first evidence for the existence of gluons. Taylor, Friedman and Kendall were jointly awarded the Nobel Prize in 1990 for this work. Taylor died at his home in Stanford, California, near the campus of Stanford University, on 22 February 2018 at the age of 88.
Taylor received numerous awards and honours, including
A particle accelerator is a machine that uses electromagnetic fields to propel charged particles to high speeds and energies, and to contain them in well-defined beams. Large accelerators are used for basic research in particle physics. The most powerful accelerator is the Large Hadron Collider (LHC) near Geneva, built by the European collaboration CERN. It is a collider accelerator, which can accelerate two beams of protons to an energy of 6.5 TeV and cause them to collide head-on, creating center-of-mass energies of 13 TeV. Other powerful accelerators are KEKB at KEK in Japan, RHIC at Brookhaven National Laboratory, and the Tevatron at Fermilab, Illinois. Accelerators are also used as synchrotron light sources for the study of condensed matter physics. Smaller particle accelerators are used in a wide variety of applications, including particle therapy for oncological purposes, radioisotope production for medical diagnostics, ion implanters for the manufacture of semiconductors, and accelerator mass spectrometers for measurements of rare isotopes such as radiocarbon.
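The quoted 13 TeV follows directly from the beam energies: for two identical ultrarelativistic beams colliding head-on, the center-of-mass energy is simply the sum of the two beam energies. This is standard collider kinematics, not a figure from the text:

```latex
% Symmetric head-on collider with equal beam energies (rest masses negligible):
\[
\sqrt{s} = 2E_{\text{beam}} = 2 \times 6.5\,\text{TeV} = 13\,\text{TeV}
\]
```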
There are more than 30,000 accelerators in operation around the world. There are two basic classes of accelerators: electrostatic and electrodynamic accelerators. Electrostatic accelerators use static electric fields to accelerate particles; the most common types are the Cockcroft-Walton generator and the Van de Graaff generator. A small-scale example of this class is the cathode ray tube in an ordinary old television set. The achievable kinetic energy for particles in these devices is determined by the accelerating voltage, which is limited by electrical breakdown. Electrodynamic or electromagnetic accelerators, on the other hand, use changing electromagnetic fields to accelerate particles. Since in these types the particles can pass through the same accelerating field multiple times, the output energy is not limited by the strength of the accelerating field. This class, first developed in the 1920s, is the basis for most modern large-scale accelerators. Rolf Widerøe, Gustav Ising, Leó Szilárd, Max Steenbeck and Ernest Lawrence are considered pioneers of this field, having conceived and built the first operational linear particle accelerator, the betatron, and the cyclotron.
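The electrostatic energy limit just described is easy to quantify: a particle of charge q accelerated through a potential difference V gains kinetic energy qV, so breakdown at megavolt-scale voltages caps such machines at MeV-scale energies per unit charge. As a worked example (standard physics, not drawn from the text):

```latex
% Energy gained by a charge q accelerated through a voltage V:
\[ E_k = qV \]
% A proton (q = e) accelerated through 1 MV gains
\[ E_k = 1\,\text{MeV} \approx 1.6\times 10^{-13}\,\text{J} \]
```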
Because colliders can give evidence of the structure of the subatomic world, accelerators were commonly referred to as atom smashers in the 20th century. Despite the fact that most accelerators propel subatomic particles, the term persists in popular usage when referring to particle accelerators in general. Beams of high-energy particles are useful for fundamental and applied research in the sciences, and also in many technical and industrial fields unrelated to fundamental research. It has been estimated that there are 30,000 accelerators worldwide. Of these, only about 1% are research machines with energies above 1 GeV, while about 44% are for radiotherapy, 41% for ion implantation, 9% for industrial processing and research, and 4% for biomedical and other low-energy research. The bar graph shows the breakdown of the number of industrial accelerators according to their applications. The numbers are based on 2012 statistics available from various sources, including production and sales data published in presentations or market surveys, and data provided by a number of manufacturers.
For the most basic inquiries into the dynamics and structure of matter, space and time, physicists seek the simplest kinds of interactions at the highest possible energies. These entail particle energies of many GeV, and the interactions of the simplest kinds of particles: leptons and quarks for the matter, or photons and gluons for the field quanta. Since isolated quarks are experimentally unavailable due to color confinement, the simplest available experiments involve the interactions of, first, leptons with each other, and second, of leptons with nucleons, which are composed of quarks and gluons. To study the collisions of quarks with each other, scientists resort to collisions of nucleons, which at high energy may be usefully considered as 2-body interactions of the quarks and gluons of which they are composed. Thus elementary particle physicists tend to use machines creating beams of electrons, positrons, protons and antiprotons, interacting with each other or with the simplest nuclei at the highest possible energies, generally hundreds of GeV or more.
The largest and highest-energy particle accelerator used for elementary particle physics is the Large Hadron Collider at CERN, operating since 2009. Nuclear physicists and cosmologists may use beams of bare atomic nuclei, stripped of electrons, to investigate the structure and properties of the nuclei themselves, and of condensed matter at high temperatures and densities, such as might have occurred in the first moments of the Big Bang. These investigations involve collisions of heavy nuclei, of atoms like iron or gold, at energies of several GeV per nucleon. The largest such particle accelerator is the Relativistic Heavy Ion Collider at Brookhaven National Laboratory. Particle accelerators can also produce proton beams, which can produce proton-rich medical or research isotopes, as opposed to the neutron-rich ones made in fission reactors. An example of this type of machine is LANSCE at Los Alamos. Besides being of fundamental interest, the acceleration of electrons in a magnetic field causes them to emit extremely bright beams of photons via synchrotron radiation.