Econometrics is the application of statistical methods to economic data in order to give empirical content to economic relationships. More precisely, it is "the quantitative analysis of actual economic phenomena based on the concurrent development of theory and observation, related by appropriate methods of inference". An introductory economics textbook describes econometrics as allowing economists "to sift through mountains of data to extract simple relationships". The first known use of the term "econometrics" was by Polish economist Paweł Ciompa in 1910. Jan Tinbergen is considered by many to be one of the founding fathers of econometrics, and Ragnar Frisch is credited with coining the term in the sense in which it is used today.
Applied econometrics uses theoretical econometrics and real-world data for assessing economic theories, developing econometric models, analysing economic history, and forecasting. A basic tool for econometrics is the multiple linear regression model. In modern econometrics, other statistical tools are used as well, but linear regression is still the most common starting point for an analysis. Estimating a linear regression on two variables can be visualised as fitting a line through data points representing paired values of the independent and dependent variables. For example, consider Okun's law, which relates GDP growth to the unemployment rate. This relationship is represented in a linear regression where the change in the unemployment rate is a function of an intercept β₀, a given value of GDP growth multiplied by a slope coefficient β₁, and an error term ε:

ΔUnemployment = β₀ + β₁ · Growth + ε.

The unknown parameters β₀ and β₁ can be estimated. Here β₁ is estimated to be −1.77 and β₀ is estimated to be 0.83.
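The estimation can be reproduced in a few lines of code. The following Python sketch fits the Okun's law regression by ordinary least squares; the growth and unemployment figures are invented for illustration and are not the data behind the −1.77 and 0.83 estimates above.

    import numpy as np

    growth = np.array([3.0, 2.0, 4.0, -1.0, 1.5, 2.5])    # GDP growth, percent (illustrative)
    d_unemp = np.array([-1.2, 0.1, -2.1, 2.6, 0.4, -0.7]) # change in unemployment rate (illustrative)

    X = np.column_stack([np.ones_like(growth), growth])   # design matrix with intercept column
    beta, *_ = np.linalg.lstsq(X, d_unemp, rcond=None)    # minimize ||X b - y||^2
    b0, b1 = beta
    print(f"intercept = {b0:.2f}, slope = {b1:.2f}")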
This means that if GDP growth increased by one percentage point, the unemployment rate would be predicted to drop by 1.77 points. The model could be tested for statistical significance as to whether an increase in GDP growth is associated with a decrease in the unemployment rate, as hypothesized. If the estimate of β₁ were not significantly different from 0, the test would fail to find evidence that changes in the growth rate and unemployment rate were related. The variance in a prediction of the dependent variable as a function of the independent variable is given in polynomial least squares. Econometric theory uses statistical theory and mathematical statistics to evaluate and develop econometric methods. Econometricians try to find estimators that have desirable statistical properties including unbiasedness and consistency. An estimator is unbiased if its expected value is the true value of the parameter, and consistent if it converges to the true value as the sample size grows. Ordinary least squares is often used for estimation since it provides the BLUE or "best linear unbiased estimator" given the Gauss–Markov assumptions. When these assumptions are violated or other statistical properties are desired, other estimation techniques such as maximum likelihood estimation, generalized method of moments, or generalized least squares are used.
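Unbiasedness can be illustrated by simulation. In the sketch below, data are repeatedly generated from a known linear model satisfying the Gauss–Markov assumptions (the true parameter values are made up); individual OLS slope estimates scatter from sample to sample, but their average approaches the true slope.

    import numpy as np

    rng = np.random.default_rng(0)
    true_b0, true_b1 = 0.8, -1.8        # true parameters of the simulated model
    slopes = []
    for _ in range(5000):
        x = rng.uniform(-1.0, 4.0, size=50)               # simulated growth rates
        y = true_b0 + true_b1 * x + rng.normal(0, 1, 50)  # linear model plus i.i.d. noise
        X = np.column_stack([np.ones(50), x])
        slopes.append(np.linalg.lstsq(X, y, rcond=None)[0][1])
    print(np.mean(slopes))   # close to -1.8, as unbiasedness predicts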
Estimators that incorporate prior beliefs are advocated by those who favour Bayesian statistics over traditional, classical or "frequentist" approaches. Econometrics may use standard statistical models to study economic questions, but most often these are applied to observational data, rather than data from controlled experiments. In this, the design of observational studies in econometrics is similar to the design of studies in other observational disciplines, such as astronomy, epidemiology and political science. Analysis of data from an observational study is guided by the study protocol, although exploratory data analysis may be useful for generating new hypotheses. Economics often analyses systems of equations and inequalities, such as supply and demand hypothesized to be in equilibrium; consequently, the field of econometrics has developed methods for identification and estimation of simultaneous-equation models.
These methods are analogous to methods used in other areas of science, such as the field of system identification in systems analysis and control theory. Such methods may allow researchers to estimate models and investigate their empirical consequences, without directly manipulating the system. One of the fundamental statistical methods used by econometricians is regression analysis. Regression methods are important in econometrics because economists typically cannot use controlled experiments.
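To make the simultaneous-equations problem concrete, the sketch below uses instrumental variables, one standard identification strategy for such systems; the specific demand-and-supply setup and the instrument are assumptions for illustration, not taken from the text. Because price and the demand shock are correlated, plain OLS is biased, while the instrument recovers the true slope.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10000
    z = rng.normal(size=n)                              # instrument: supply-side cost shifter
    u = rng.normal(size=n)                              # unobserved demand shock
    price = 0.9 * z + 0.5 * u + rng.normal(size=n)      # price is endogenous (correlated with u)
    quantity = 2.0 - 1.5 * price + u                    # true demand slope is -1.5

    X = np.column_stack([np.ones(n), price])
    Z = np.column_stack([np.ones(n), z])
    beta_iv = np.linalg.solve(Z.T @ X, Z.T @ quantity)  # IV estimator (Z'X)^(-1) Z'y
    print(beta_iv)   # slope near -1.5; OLS on the same data would be biased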
Philosophy is the study of general and fundamental questions about existence, values, reason and language. Such questions are often posed as problems to be studied or resolved; the term was probably coined by Pythagoras. Philosophical methods include questioning, critical discussion, rational argument and systematic presentation. Classic philosophical questions include: Is it possible to know anything and to prove it? What is most real? Philosophers also pose more practical and concrete questions, such as: Is there a best way to live? Is it better to be just or unjust? Do humans have free will? Historically, "philosophy" encompassed any body of knowledge. From the time of the Ancient Greek philosopher Aristotle to the 19th century, "natural philosophy" encompassed astronomy and physics. For example, Newton's 1687 Mathematical Principles of Natural Philosophy later became classified as a book of physics. In the 19th century, the growth of modern research universities led academic philosophy and other disciplines to professionalize and specialize.
In the modern era, some investigations that were traditionally part of philosophy became separate academic disciplines, including psychology, sociology and economics. Other investigations related to art, politics, or other pursuits remained part of philosophy. For example, is beauty objective or subjective? Are there many scientific methods or just one? Is political utopia a hopeful dream or a hopeless fantasy? Major sub-fields of academic philosophy include metaphysics, ethics, political philosophy and philosophy of science. Traditionally, the term "philosophy" referred to any body of knowledge; in this sense, philosophy is closely related to religion, natural science and politics. In the first part of the first book of his Academics, Cicero introduced the division of philosophy into logic, physics and ethics. Metaphysical philosophy was the study of existence, God, logic and other abstract objects; over time, this division has changed.
Natural philosophy has split into the various natural sciences, including astronomy, chemistry and cosmology. Moral philosophy still includes value theory. Metaphysical philosophy has birthed formal sciences such as logic and philosophy of science, but still includes epistemology and others. Many philosophical debates that began in ancient times are still debated today. Colin McGinn and others claim that no philosophical progress has occurred during that interval. Chalmers and others, by contrast, see progress in philosophy similar to that in science, while Talbot Brewer argued that "progress" is the wrong standard by which to judge philosophical activity. In one general sense, philosophy is associated with wisdom, intellectual culture and a search for knowledge. In that sense, all cultures and literate societies ask philosophical questions such as "how are we to live" and "what is the nature of reality". A broad and impartial conception of philosophy finds a reasoned inquiry into such matters as reality and life in all world civilizations. Western philosophy is the philosophical tradition of the Western world, dating to Pre-Socratic thinkers who were active in Ancient Greece in the 6th century BCE, such as Thales and Pythagoras, who practiced a "love of wisdom" and were also termed physiologoi.
Socrates was an influential philosopher who insisted that he possessed no wisdom but was a pursuer of wisdom. Western philosophy can be divided into three eras: Ancient, Medieval and Modern. The Ancient era was dominated by Greek philosophical schools which arose out of the various pupils of Socrates, such as Plato, who founded the Platonic Academy, and his student Aristotle, who founded the Peripatetic school; both were influential in the Western tradition. Other traditions include Cynicism, Greek Skepticism and Epicureanism. Important topics covered by the Greeks included metaphysics, the nature of the well-lived life, the possibility of knowledge and the nature of reason. With the rise of the Roman empire, Greek philosophy was increasingly discussed in Latin by Romans such as Cicero and Seneca. Medieval philosophy is the period following the fall of the Western Roman Empire, and was dominated by the rise of Christianity.
Deoxyribonucleic acid (DNA) is a molecule composed of two chains that coil around each other to form a double helix carrying the genetic instructions used in the growth, development and reproduction of all known organisms and many viruses. DNA and ribonucleic acid (RNA) are nucleic acids. The two DNA strands are known as polynucleotides, as they are composed of simpler monomeric units called nucleotides. Each nucleotide is composed of one of four nitrogen-containing nucleobases, a sugar called deoxyribose, and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. The nitrogenous bases of the two separate polynucleotide strands are bound together, according to base pairing rules, with hydrogen bonds to make double-stranded DNA. The complementary nitrogenous bases are divided into two groups, pyrimidines and purines. In DNA, the pyrimidines are thymine and cytosine; the purines are adenine and guanine. Both strands of double-stranded DNA store the same biological information.
This information is replicated as and when the two strands separate. A large part of DNA is non-coding, meaning that these sections do not serve as patterns for protein sequences. The two strands of DNA run in opposite directions to each other and are thus antiparallel. Attached to each sugar is one of four types of nucleobases; it is the sequence of these four nucleobases along the backbone that encodes genetic information. RNA strands are created using DNA strands as a template in a process called transcription. Under the genetic code, these RNA strands specify the sequence of amino acids within proteins in a process called translation. Within eukaryotic cells, DNA is organized into long structures called chromosomes. Before typical cell division, these chromosomes are duplicated in the process of DNA replication, providing a complete set of chromosomes for each daughter cell. Eukaryotic organisms store most of their DNA inside the cell nucleus as nuclear DNA, and some in the mitochondria as mitochondrial DNA or in chloroplasts as chloroplast DNA. In contrast, prokaryotes store their DNA only in circular chromosomes.
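The base-pairing and antiparallel-strand rules can be stated compactly in code. This minimal Python sketch (the sequence is an arbitrary example) computes the complementary partner strand of a DNA sequence and its RNA transcript, in which uracil replaces thymine.

    PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

    def reverse_complement(strand: str) -> str:
        # Complement each base and reverse: the partner strand runs antiparallel.
        return "".join(PAIRS[base] for base in reversed(strand))

    def transcribe(template: str) -> str:
        # RNA is built complementary to the template strand, with U in place of T.
        return reverse_complement(template).replace("T", "U")

    seq = "ATGCGTAC"
    print(reverse_complement(seq))  # GTACGCAT
    print(transcribe(seq))          # GUACGCAU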
Within eukaryotic chromosomes, chromatin proteins such as histones compact and organize DNA. These compacting structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed. DNA was first isolated by Friedrich Miescher in 1869. Its molecular structure was first identified by Francis Crick and James Watson at the Cavendish Laboratory within the University of Cambridge in 1953, whose model-building efforts were guided by X-ray diffraction data acquired by Raymond Gosling, a post-graduate student of Rosalind Franklin. DNA is used by researchers as a molecular tool to explore physical laws and theories, such as the ergodic theorem and the theory of elasticity. The unique material properties of DNA have made it an attractive molecule for material scientists and engineers interested in micro- and nano-fabrication. Among notable advances in this field are DNA origami and DNA-based hybrid materials. DNA is a long polymer made from repeating units called nucleotides.
The structure of DNA is dynamic along its length, being capable of coiling into tight loops and other shapes. In all species it is composed of two helical chains, bound to each other by hydrogen bonds. Both chains are coiled around the same axis and have the same pitch of 34 angstroms; the pair of chains has a radius of 10 angstroms. According to another study, when measured in a different solution, the DNA chain measured 22 to 26 angstroms wide, and one nucleotide unit measured 3.3 Å long. Although each individual nucleotide is small, a DNA polymer can be very large and contain hundreds of millions of nucleotides, such as in chromosome 1. Chromosome 1 is the largest human chromosome, with approximately 220 million base pairs, and would be 85 mm long if straightened. DNA does not usually exist as a single strand, but instead as a pair of strands that are held tightly together; these two long strands coil around each other in the shape of a double helix. Each nucleotide contains both a segment of the backbone of the molecule, which holds the chain together, and a nucleobase, which interacts with the other DNA strand in the helix. A nucleobase linked to a sugar is called a nucleoside, and a base linked to a sugar and to one or more phosphate groups is called a nucleotide.
A biopolymer comprising multiple linked nucleotides is called a polynucleotide. The backbone of the DNA strand is made from alternating phosphate and sugar residues; the sugar in DNA is 2-deoxyribose, a pentose sugar. The sugars are joined together by phosphate groups that form phosphodiester bonds between the third and fifth carbon atoms of adjacent sugar rings; these are known as the 3′ and 5′ carbons, the prime symbol being used to distinguish these carbon atoms from those of the base to which the deoxyribose forms a glycosidic bond. When imagining DNA, each phosphoryl group is considered to "belong" to the nucleotide whose 5′ carbon forms a bond with it. Any DNA strand therefore has one end at which there is a phosphoryl group attached to the 5′ carbon of a ribose (the 5′ end) and another end at which there is a free hydroxyl group attached to the 3′ carbon of a ribose (the 3′ end).
In statistics, quality assurance and survey methodology, sampling is the selection of a subset of individuals from within a statistical population to estimate characteristics of the whole population. Statisticians attempt to collect samples that are representative of the population in question. Two advantages of sampling are lower cost and faster data collection than measuring the entire population. Each observation measures one or more properties of observable bodies distinguished as independent objects or individuals. In survey sampling, weights can be applied to the data to adjust for the sample design, particularly in stratified sampling. Results from probability theory and statistical theory are employed to guide the practice. In business and medical research, sampling is widely used for gathering information about a population. Acceptance sampling is used to determine if a production lot of material meets the governing specifications. Successful statistical practice is based on focused problem definition. In sampling, this includes defining the "population" from which our sample is drawn.
A population can be defined as including all people or items with the characteristic one wishes to understand. Because there is rarely enough time or money to gather information from everyone or everything in a population, the goal becomes finding a representative sample of that population. Sometimes what defines a population is obvious. For example, a manufacturer needs to decide whether a batch of material from production is of high enough quality to be released to the customer, or should be sentenced for scrap or rework due to poor quality. In this case, the batch is the population. Although the population of interest often consists of physical objects, sometimes it is necessary to sample over time, space, or some combination of these dimensions. For instance, an investigation of supermarket staffing could examine checkout line length at various times, or a study on endangered penguins might aim to understand their usage of various hunting grounds over time. For the time dimension, the focus may be on discrete occasions.
In other cases, the examined 'population' may be less tangible. For example, Joseph Jagger studied the behaviour of roulette wheels at a casino in Monte Carlo, and used this to identify a biased wheel. In this case, the 'population' Jagger wanted to investigate was the overall behaviour of the wheel, while his 'sample' was formed from observed results from that wheel. Similar considerations arise when taking repeated measurements of some physical characteristic such as the electrical conductivity of copper. This situation arises when seeking knowledge about the cause system of which the observed population is an outcome. In such cases, sampling theory may treat the observed population as a sample from a larger 'superpopulation'. For example, a researcher might study the success rate of a new 'quit smoking' program on a test group of 100 patients, in order to predict the effects of the program if it were made available nationwide. Here the superpopulation is "everybody in the country, given access to this treatment" – a group which does not yet exist, since the program isn't yet available to all.
Note that the population from which the sample is drawn may not be the same as the population about which information is desired. Often there is large but not complete overlap between these two groups due to frame issues etc. Sometimes they may be entirely separate – for instance, one might study rats in order to get a better understanding of human health, or one might study records from people born in 2008 in order to make predictions about people born in 2009. Time spent making the sampled population and the population of concern precise is well spent, because it raises many issues and questions that would otherwise have been overlooked at this stage. In the most straightforward case, such as the sampling of a batch of material from production, it would be most desirable to identify and measure every single item in the population and to include any one of them in our sample. However, in the more general case this is not possible or practical. There is no way to identify all rats in the set of all rats. Where voting is not compulsory, there is no way to identify which people will actually vote at a forthcoming election.
These imprecise populations are not amenable to sampling in any of the ways below to which we could apply statistical theory. As a remedy, we seek a sampling frame which has the property that we can identify every single element and include any of them in our sample. The most straightforward type of frame is a list of elements of the population with appropriate contact information. For example, in an opinion poll, possible sampling frames include an electoral register and a telephone directory. A probability sample is a sample in which every unit in the population has a chance (greater than zero) of being selected in the sample, and this probability can be accurately determined. The combination of these traits makes it possible to produce unbiased estimates of population totals, by weighting sampled units according to their probability of selection. Example: We want to estimate the total income of adults living in a given street. We visit each household in that street, identify all adults living there, and randomly select one adult from each household.
We then interview the selected person and find their income. People living on their own are certain to be selected, so we simply add their income to our estimate of the total. But a person living in a household of two adults has only a one-in-two chance of selection, so to reflect this, we would count the selected person's income twice towards the total.
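In other words, each respondent's income is weighted by the inverse of their selection probability, which here is the number of adults in their household. A minimal Python sketch of this estimator, with invented figures:

    # Each household contributes one randomly selected adult; that adult's
    # selection probability is 1 / n_adults, so weighting their income by
    # n_adults gives an unbiased (Horvitz-Thompson) estimate of the street total.
    households = [
        {"n_adults": 1, "selected_income": 30000},
        {"n_adults": 2, "selected_income": 45000},
        {"n_adults": 3, "selected_income": 25000},
    ]

    total = sum(h["selected_income"] * h["n_adults"] for h in households)
    print(total)   # 30000 + 2*45000 + 3*25000 = 195000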
Sedimentation is the tendency for particles in suspension to settle out of the fluid in which they are entrained and come to rest against a barrier. This is due to their motion through the fluid in response to the forces acting on them: these forces can be due to gravity, centrifugal acceleration, or electromagnetism. In geology, sedimentation is often used as the opposite of erosion, i.e. the terminal end of sediment transport. In that sense, it includes the termination of transport by true bedload transport. Settling is the falling of suspended particles through the liquid, whereas sedimentation is the termination of the settling process. In estuarine environments, settling can be influenced by the presence or absence of vegetation. Trees such as mangroves are crucial to the attenuation of waves or currents, promoting the settlement of suspended particles. Sedimentation may pertain to objects of various sizes, ranging from large rocks in flowing water to suspensions of dust and pollen particles to cellular suspensions to solutions of single molecules such as proteins and peptides.
For small molecules, a sufficiently strong force must be supplied to produce significant sedimentation. The term is typically used in geology to describe the deposition of sediment which results in the formation of sedimentary rock, but it is also used in various chemical and environmental fields to describe the motion of often-smaller particles and molecules. This process is also used in the biotech industry to separate cells from the culture media. In a sedimentation experiment, the applied force accelerates the particles to a terminal velocity v_term at which the applied force is exactly cancelled by an opposing drag force. For small enough particles, the drag force varies linearly with the terminal velocity, i.e. F_drag = f · v_term, where f depends only on the properties of the particle and the surrounding fluid. Similarly, the applied force varies linearly with some coupling constant q that depends only on the properties of the particle, F_app = q · E_app. Hence, it is possible to define a sedimentation coefficient s ≝ q / f that depends only on the properties of the particle and the surrounding fluid.
Thus, measuring s can reveal underlying properties of the particle. In many cases, the motion of the particles is blocked by a hard boundary; the resulting accumulation of particles at the boundary is opposed by the diffusion of the particles. The sedimentation of a single particle under gravity is described by the Mason–Weaver equation, which has a simple exact solution. The sedimentation of a single particle under centrifugal force is described by the Lamm equation, which likewise has an exact solution. In both cases the sedimentation coefficient s equals m_b / f, where m_b is the buoyant mass. However, the Lamm equation differs from the Mason–Weaver equation because the centrifugal force depends on the radius from the origin of rotation, whereas in the Mason–Weaver equation gravity is constant. The Lamm equation also has extra terms, since it pertains to sector-shaped cells, whereas the Mason–Weaver equation is one-dimensional.
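As a rough illustration of these relations, the Python sketch below computes s = m_b / f and the resulting terminal velocity under gravity for a small sphere, assuming Stokes drag f = 3πμd (an assumption supplied here, not stated above); the particle and fluid values are representative, not from the text.

    import math

    d = 1e-6        # particle diameter, m
    rho_p = 2650.0  # particle density, kg/m^3 (e.g. quartz)
    rho_f = 1000.0  # fluid density, kg/m^3 (water)
    mu = 1.0e-3     # dynamic viscosity of water, Pa*s
    g = 9.81        # gravitational acceleration, m/s^2

    volume = math.pi * d**3 / 6
    m_b = (rho_p - rho_f) * volume   # buoyant mass
    f = 3 * math.pi * mu * d         # Stokes drag coefficient (low Reynolds number)
    s = m_b / f                      # sedimentation coefficient
    print(f"s = {s:.2e} s, v_term = s*g = {s * g:.2e} m/s")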
Classification of sedimentation: Type 1 sedimentation is characterized by particles that settle discretely at a constant settling velocity; they do not flocculate or stick to each other during settling. Example: sand and grit material. Type 2 sedimentation is characterized by particles that flocculate during sedimentation; because of this, their size changes and therefore their settling velocity changes. Example: alum or iron coagulation. Type 3 sedimentation is known as zone sedimentation. In this process the particles are at such a high concentration that they tend to settle as a mass, and a distinct clear zone and sludge zone are present. Zone settling occurs in lime-softening sedimentation, activated sludge sedimentation and sludge thickeners. In geology, sedimentation is the deposition of particles carried by a fluid flow. For suspended load, this can be expressed mathematically by the Exner equation, and results in the formation of depositional landforms and the rocks that constitute the sedimentary record. An undesired increase in the transport and sedimentation of suspended material is called siltation, and it is a major source of pollution in waterways in some parts of the world.
High sedimentation rates can be a result of poor land management and a high frequency of flooding events. If not managed properly, siltation can be detrimental to fragile ecosystems on the receiving end, such as coral reefs. Climate change also affects siltation rates. In chemistry, sedimentation has been used to measure the size of large molecules, where the force of gravity is augmented with centrifugal force in an ultracentrifuge.
Weathering is the breaking down of rocks and minerals, as well as wood and artificial materials, through contact with the Earth's atmosphere and biological organisms. Weathering occurs in situ, that is, in the same place, with little or no movement; it should thus not be confused with erosion, which involves the movement of rocks and minerals by agents such as water, snow, wind and gravity, by which they are transported and deposited in other locations. Two important classifications of weathering processes exist – physical and chemical weathering. Mechanical or physical weathering involves the breakdown of rocks and soils through direct contact with atmospheric conditions, such as heat, water and pressure. The second classification, chemical weathering, involves the direct effect of atmospheric chemicals, or of biologically produced chemicals known as biological weathering, in the breakdown of rocks and minerals. While physical weathering is accentuated in cold or dry environments, chemical reactions are most intense where the climate is wet and hot.
However, both types of weathering often occur together, and each tends to accelerate the other. For example, physical abrasion decreases the size of particles and therefore increases their surface area, making them more susceptible to chemical reactions. The various agents act in concert to convert primary minerals to secondary minerals, and to release plant nutrient elements in soluble forms. The materials left over after the rock breaks down, combined with organic material, create soil. The mineral content of the soil is determined by the parent material. In addition, many of Earth's landforms and landscapes are the result of weathering processes combined with erosion and re-deposition. Physical weathering, also called mechanical weathering or disaggregation, is the class of processes that causes the disintegration of rocks without chemical change. The primary process in physical weathering is abrasion. However, chemical and physical weathering often go hand in hand. Physical weathering can occur due to temperature, frost, etc. For example, cracks exploited by physical weathering will increase the surface area exposed to chemical action, thus amplifying the rate of disintegration.
Abrasion by water and wind processes loaded with sediment can have tremendous cutting power, as is amply demonstrated by the gorges and valleys around the world. In glacial areas, huge moving ice masses embedded with soil and rock fragments grind down rocks in their path and carry away large volumes of material. Plant roots sometimes enter cracks in rocks and pry them apart, resulting in some disintegration. However, such biotic influences are of little importance in producing parent material when compared to the drastic physical effects of water, ice and temperature change. Thermal stress weathering, sometimes called insolation weathering, results from the expansion and contraction of rock caused by temperature changes. For example, heating of rocks by sunlight or fires can cause expansion of their constituent minerals. As some minerals expand more than others, temperature changes set up differential stresses that eventually cause the rock to crack apart. Because the outer surface of a rock is often warmer or colder than the more protected inner portions, some rocks may weather by exfoliation – the peeling away of outer layers.
This process may be accelerated if ice forms in the surface cracks. When water freezes, it expands with a force of about 1465 Mg/m², disintegrating huge rock masses and dislodging mineral grains from smaller fragments. Thermal stress weathering comprises thermal shock and thermal fatigue. It is an important mechanism in deserts, where there is a large diurnal temperature range, hot in the day and cold at night. The repeated heating and cooling exerts stress on the outer layers of rocks, which can cause their outer layers to peel off in thin sheets; this process of peeling off is called exfoliation. Although temperature changes are the principal driver, moisture can enhance thermal expansion in rock. Forest fires and range fires are also known to cause significant weathering of rocks and boulders exposed along the ground surface: the intense, localized heat from a wildfire can expand a boulder, and the resulting thermal shock can cause its structure to fail.
The differential expansion produced by a thermal gradient can be understood in terms of stress or of strain, equivalently. At some point, this stress can exceed the strength of the material, causing a crack to form. If nothing stops this crack from propagating through the material, it will cause the object's structure to fail; a rough quantitative sketch of this argument is given at the end of this discussion. Frost weathering, also called ice wedging or cryofracturing, is the collective name for several processes where ice is present; these processes include frost shattering, frost wedging and freeze–thaw weathering. Severe frost shattering produces huge piles of rock fragments called scree, which may be located at the foot of mountain areas or along slopes. Frost weathering is common in mountain areas where the temperature is around the freezing point of water. Certain frost-susceptible soils expand or heave upon freezing as a result of water migrating via capillary action to grow ice lenses near the freezing front.
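The thermal-stress argument above can be made roughly quantitative with the standard thermoelastic relation σ = E·α·ΔT, which is assumed here rather than given in the text; the granite-like material values in this Python sketch are likewise illustrative.

    E = 50e9      # Young's modulus of a granite-like rock, Pa (assumed)
    alpha = 8e-6  # linear thermal expansion coefficient, 1/K (assumed)
    dT = 40.0     # temperature swing at the rock surface, K

    sigma = E * alpha * dT   # thermal stress in a constrained surface layer, Pa
    print(f"thermal stress ~ {sigma / 1e6:.0f} MPa")
    # ~16 MPa, comparable to typical rock tensile strengths of ~5-20 MPa,
    # so repeated temperature swings can plausibly crack the outer layers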
Transduction is the process by which foreign DNA is introduced into a cell by a virus or viral vector. An example is the viral transfer of DNA from one bacterium to another, and hence an example of horizontal gene transfer. Transduction does not require physical contact between the cell donating the DNA and the cell receiving the DNA, and it is DNase resistant. Transduction is a common tool used by molecular biologists to stably introduce a foreign gene into a host cell's genome. When viruses, including bacteriophages, infect bacterial cells, their normal mode of reproduction is to harness the replication and translation machinery of the host bacterial cell to make numerous virions, or complete viral particles, including the viral DNA or RNA and the protein coat. Transduction was discovered by Norton Zinder and Joshua Lederberg at the University of Wisconsin–Madison in 1952 in Salmonella. Transduction happens through either the lytic cycle or the lysogenic cycle. If the lysogenic cycle is adopted, the phage chromosome is integrated into the bacterial chromosome, where it can remain dormant for thousands of generations.
If the lysogen is induced, the phage genome is excised from the bacterial chromosome and initiates the lytic cycle, which culminates in lysis of the cell and the release of phage particles. The lytic cycle leads to the production of new phage particles, which are released by lysis of the host. The packaging of bacteriophage DNA has low fidelity, and small pieces of bacterial DNA, together with the bacteriophage genome, may become packaged into the phage capsid. At the same time, some phage genes are left behind in the bacterial chromosome. There are three types of recombination events that can lead to this incorporation of bacterial DNA into the viral DNA, leading to two modes of genetic recombination. Generalized transduction is the process by which any bacterial DNA may be transferred to another bacterium via a bacteriophage; it is a rare event. In essence, this is the packaging of bacterial DNA into a viral envelope. This may occur in two main ways, recombination and headful packaging. If bacteriophages undertake the lytic cycle of infection upon entering a bacterium, the virus will take control of the cell's machinery for use in replicating its own viral DNA.
If by chance bacterial chromosomal DNA is inserted into the viral capsid, which is normally used to encapsulate the viral DNA, the mistake will lead to generalized transduction. If the virus replicates using 'headful packaging', it attempts to fill the nucleocapsid with genetic material. If the viral genome results in spare capacity, viral packaging mechanisms may incorporate bacterial genetic material into the new virion. The new virus capsule, now loaded in part with bacterial DNA, continues to infect another bacterial cell. This bacterial material may become recombined into another bacterium upon infection. When the new DNA is inserted into this recipient cell, it can fall to one of three fates: the DNA will be absorbed by the cell and be recycled for spare parts; if the DNA was a plasmid, it will re-circularize inside the new cell and become a plasmid again; or, if the new DNA matches a homologous region of the recipient cell's chromosome, it will exchange DNA material, similar to the actions in bacterial recombination.
Specialized transduction is the process by which a restricted set of bacterial genes is transferred to another bacterium. The genes that get transferred depend on where the phage genome is integrated on the bacterial chromosome. Specialized transduction occurs when the prophage excises imprecisely from the chromosome, so that bacterial genes lying adjacent to the prophage are included in the excised DNA. The excised DNA is then packaged into a new virus particle, which delivers the DNA to a new bacterium, where the donor genes can be inserted into the recipient chromosome or remain in the cytoplasm, depending on the nature of the bacteriophage. When the encapsulated phage material infects another cell and becomes a "prophage", the incorporated prophage DNA is called a "heterogenote". An example of specialized transduction is λ phage in Escherichia coli. Transduction with viral vectors can be used to modify genes in mammalian cells; it is used as a tool in basic research and is being researched as a potential means for gene therapy. In these cases, a plasmid is constructed in which the genes to be transferred are flanked by viral sequences that are used by viral proteins to recognize and package the viral genome into viral particles.
This plasmid is inserted into a producer cell together with other plasmids that carry the viral genes required for the formation of infectious virions. In these producer cells, the viral proteins expressed by these packaging constructs bind the sequences on the DNA/RNA to be transferred and insert it into viral particles. For safety, none of the plasmids used contains all the sequences required for virus formation, so that simultaneous transfection of multiple plasmids is required to get infectious virions. Moreover, only the plasmid carrying the sequences to be transferred contains signals that allow the genetic materials to be packaged in virions, so that none of the genes encoding viral proteins are packaged. Viruses collected from these producer cells are then applied to the cells to be altered. The initial stages of these infections mimic infection with natural viruses.