Cathode rays are streams of electrons observed in vacuum tubes. If an evacuated glass tube is equipped with two electrodes and a voltage is applied, the glass behind the positive electrode is observed to glow, due to electrons emitted from the cathode. Cathode rays were first observed in 1869 by German physicist Johann Wilhelm Hittorf, and were named Kathodenstrahlen, or cathode rays, in 1876 by Eugen Goldstein. In 1897, British physicist J. J. Thomson showed that cathode rays were composed of a previously unknown negatively charged particle, later named the electron. Cathode ray tubes use a focused beam of electrons deflected by electric or magnetic fields to render an image on a screen. Cathode rays are so named because they are emitted by the negative electrode, or cathode, in a vacuum tube. To release electrons into the tube, they first must be detached from the atoms of the cathode. In the early cold cathode vacuum tubes, called Crookes tubes, this was done by applying a high electrical potential of thousands of volts between the anode and the cathode to ionize the residual gas atoms in the tube.
The positive ions were accelerated by the electric field toward the cathode, and when they collided with it they knocked electrons out of its surface. Modern vacuum tubes use thermionic emission, in which the cathode is made of a thin wire filament heated by a separate electric current passing through it; the increased random heat motion of the filament knocks electrons out of its surface and into the evacuated space of the tube. Since the electrons have a negative charge, they are repelled by the negative cathode and attracted to the positive anode, so they travel in straight lines through the empty tube. The voltage applied between the electrodes accelerates these low-mass particles to high velocities. Cathode rays are invisible, but their presence was first detected in early vacuum tubes when they struck the glass wall of the tube, exciting the atoms of the glass and causing them to emit light, a glow called fluorescence. Researchers noticed that objects placed in the tube in front of the cathode could cast a shadow on the glowing wall, and realized that something must be travelling in straight lines from the cathode.
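The acceleration of electrons by the tube voltage can be quantified with the non-relativistic energy balance eV = ½mv². The sketch below is an illustration added here, not part of the original text; the function name and the 5 kV example value are assumptions. It shows why even modest voltages give very high electron speeds:

```python
import math

# Physical constants (SI units, CODATA values)
E_CHARGE = 1.602176634e-19  # elementary charge, C
E_MASS = 9.1093837015e-31   # electron mass, kg

def electron_speed(voltage):
    """Non-relativistic speed gained by an electron accelerated
    through `voltage` volts: eV = (1/2) m v^2."""
    return math.sqrt(2 * E_CHARGE * voltage / E_MASS)

# A few kilovolts, typical of a Crookes tube, already yields
# roughly 4.2e7 m/s, about 14% of the speed of light
# (relativistic corrections are still small at this speed).
v = electron_speed(5000)  # 5 kV accelerating potential
print(f"{v:.3e} m/s")
```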
After the electrons reach the anode, they travel through the anode wire to the power supply and back to the cathode, so cathode rays carry electric current through the tube. The current in a beam of cathode rays through a vacuum tube can be controlled by passing it through a metal screen of wires (a grid) between cathode and anode, to which a small negative voltage is applied; the electric field of the wires deflects some of the electrons, preventing them from reaching the anode. The amount of current that gets through to the anode depends on the voltage on the grid. Thus, a small voltage on the grid can be made to control a much larger voltage on the anode; this is the principle used in vacuum tubes to amplify electrical signals. The triode vacuum tube, developed between 1907 and 1914, was the first electronic device that could amplify, and it is still used in some applications such as radio transmitters. High-speed beams of cathode rays can be steered and manipulated by electric fields created by additional metal plates in the tube to which voltage is applied, or by magnetic fields created by coils of wire.
These are used in cathode ray tubes, found in televisions and computer monitors, and in electron microscopes. After the 1654 invention of the vacuum pump by Otto von Guericke, physicists began to experiment with passing high-voltage electricity through rarefied air. In 1705, it was noted that electrostatic generator sparks travel a longer distance through low-pressure air than through air at atmospheric pressure. In 1838, Michael Faraday applied a high voltage between two metal electrodes at either end of a glass tube partially evacuated of air, and noticed a strange light arc with its beginning at the cathode and its end at the anode. In 1857, German physicist and glassblower Heinrich Geissler pumped more air out with an improved pump, to a pressure of around 10⁻³ atm, and found that, instead of an arc, a glow filled the tube; the voltage applied between the two electrodes of the tubes, generated by an induction coil, was anywhere between a few kilovolts and 100 kV. These tubes, called Geissler tubes, were similar to today's neon signs.
The explanation of these effects was that the high voltage accelerated free electrons and electrically charged atoms present in the air of the tube. At low pressure, there was enough space between the gas atoms that the electrons could accelerate to high enough speeds that when they struck an atom they knocked electrons off of it, creating more positive ions and free electrons, which went on to create more ions and electrons in a chain reaction, known as a glow discharge; the positive ions were attracted to the cathode and when they struck it knocked more electrons out of it, which were attracted toward the anode. Thus the ionized air was electrically conductive and an electric current flowed through the tube. Geissler tubes had enough air in them that the electrons could only travel a tiny distance before colliding with an atom; the electrons in these tubes moved in a slow diffusion process, never gaining much speed, so these tubes didn't produce cathode rays. Instead, they produced a colorful glow discharge, caused when the electrons struck gas atoms, exciting their orbital electrons to higher energy levels.
The electrons released this energy as light. This process is called fluorescence. By the 1870s, British physicist William Crookes and others were able to evacuate t
Nuclear propulsion includes a wide variety of propulsion methods that fulfill the promise of the Atomic Age by using some form of nuclear reaction as their primary power source. The idea of using nuclear material for propulsion dates back to the beginning of the 20th century. In 1903 it was hypothesised that radioactive material might be a suitable fuel for engines to propel cars and planes. H. G. Wells picked up this idea in his 1914 fiction work The World Set Free. Nuclear-powered vessels are mainly military submarines and aircraft carriers. Russia and the United States are the only countries that have had nuclear-powered civilian surface ships, including icebreakers. The United States has 11 aircraft carriers in service, all of them powered by nuclear reactors. For more detailed articles see:

Nuclear marine propulsion for civil use
List of civilian nuclear ships
Nuclear navy
List of United States Naval reactors
Soviet naval reactors
Nuclear submarine

Russia's Channel One Television news broadcast a picture and details of a nuclear-powered torpedo called Status-6 on about 12 November 2015.
The torpedo was stated as having a range of up to 10,000 km, a cruising speed of 100 knots, and an operational depth of up to 1,000 metres below the surface. The torpedo was said to carry a 100-megaton nuclear warhead. One of the suggestions emerging in the summer of 1958 from the first meeting of the scientific advisory group that became JASON was for "a nuclear-powered torpedo that could roam the seas indefinitely". Research into nuclear-powered aircraft was pursued during the Cold War by the United States and the Soviet Union, as such aircraft would allow a country to keep nuclear bombers in the air for long periods of time, a useful tactic for nuclear deterrence. Neither country created any operational nuclear aircraft. One design problem, never adequately solved, was the need for heavy shielding to protect the crew from radiation sickness. Since the advent of ICBMs in the 1960s, the tactical advantage of such aircraft was diminished and the respective projects were cancelled; because the technology was inherently dangerous, it was not considered in non-military contexts.
Nuclear-powered missiles were researched and discounted during the same period.

Aircraft:
Convair X-6
Myasishchev M-50 - Aviation Week hoax
Aircraft Nuclear Propulsion - General Electric's project to build a nuclear-powered bomber
Tupolev Tu-95LAL

Missiles:
Project Pluto - which developed the SLAM missile, which used a nuclear-powered air ramjet for propulsion
Burevestnik nuclear-powered cruise missile, announced by Vladimir Putin in 2018

Many types of nuclear propulsion have been proposed, and some of them tested, for spacecraft applications.

Project Orion, the first engineering design study of nuclear pulse propulsion
Project Daedalus, a 1970s British Interplanetary Society study of a fusion rocket
Project Longshot, a US Naval Academy-NASA nuclear pulse propulsion design
AIMStar, a proposed antimatter-catalyzed nuclear pulse propulsion craft that uses clouds of antiprotons to initiate fission and fusion within fuel pellets
ICAN-II, a proposed manned interplanetary spacecraft that used an antimatter-catalyzed nuclear pulse propulsion engine as its main form of propulsion
External Pulsed Plasma Propulsion, a propulsion concept by NASA that derives its thrust from plasma waves generated from a series of small, supercritical fission/fusion pulses behind an object in space
Bimodal Nuclear Thermal Rockets conduct nuclear fission reactions similar to those safely employed at nuclear power plants and in submarines. The energy is used to heat the liquid hydrogen propellant. Advocates of nuclear-powered spacecraft point out that at the time of launch, there is no radiation released from the nuclear reactors, since the nuclear-powered rockets are not used to lift off the Earth. Nuclear thermal rockets can provide great performance advantages compared to chemical propulsion systems. Nuclear power sources could also be used to provide the spacecraft with electrical power for operations and scientific instrumentation.

NERVA - NASA's Nuclear Energy for Rocket Vehicle Applications, a US nuclear thermal rocket program
Project Rover - an American project to develop a nuclear thermal rocket; the program ran at the Los Alamos Scientific Laboratory from 1955 through 1972
Project Timberwind, 1987-1991
Bussard ramjet, a conceptual interstellar fusion ramjet named after Robert W. Bussard
Fission fragment rocket
Fission sail
Fusion rocket
Gas core reactor rocket
Nuclear salt-water rocket
Radioisotope rocket
Nuclear photonic rocket
Nuclear electric rocket
Project Prometheus, NASA development of nuclear propulsion for long-duration spaceflight, begun in 2003

Anatolij Perminov, head of the Russian Federal Space Agency, announced that the agency would develop a nuclear-powered spacecraft for deep space travel. Preliminary design was done by 2013, with nine more years planned for development; the price was set at 17 billion rubles. The propulsion system would be megawatt-class, provided the necessary funding, the Roscosmos head stated; it would consist of a matrix of ion engines. "... Hot inert gas at a temperature of 1500 °C from the reactor turns turbines; the turbine turns the generator and compressor, which circulates the working fluid in a closed circuit. The working fluid is cooled in the radiator; the generator produces electricity for the ion engines..." According to him, the propulsion system would be able to support a human mission to Mars, with cosmonauts staying on the Red Planet for 30 days.
This journey to Mars with nuclear propulsion and a steady acceleration would take six weeks, instead of eight months by using chemical pro
Partial Nuclear Test Ban Treaty
The Partial Test Ban Treaty (PTBT) is the abbreviated name of the 1963 Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space and Under Water, which prohibited all test detonations of nuclear weapons except for those conducted underground. It is also abbreviated as the Limited Test Ban Treaty and Nuclear Test Ban Treaty, though the latter may also refer to the Comprehensive Nuclear-Test-Ban Treaty, which succeeded the PTBT for ratifying parties. Negotiations initially focused on a comprehensive ban, but this was abandoned due to technical questions surrounding the detection of underground tests and Soviet concerns over the intrusiveness of proposed verification methods. The impetus for the test ban was provided by rising public anxiety over the magnitude of nuclear tests, particularly tests of new thermonuclear weapons, and the resulting nuclear fallout. A test ban was also seen as a means of slowing nuclear proliferation and the nuclear arms race. Though the PTBT did not halt proliferation or the arms race, its enactment did coincide with a substantial decline in the concentration of radioactive particles in the atmosphere.
The PTBT was signed by the governments of the Soviet Union, the United Kingdom, and the United States in Moscow on 5 August 1963 before being opened for signature by other countries. The treaty formally went into effect on 10 October 1963. Since then, 123 other states have become party to the treaty. Ten states have signed but not ratified it. Much of the stimulus for the treaty was increasing public unease about radioactive fallout from above-ground or underwater nuclear testing, given the increasing power of nuclear devices, as well as concern about the general environmental damage caused by testing. In 1952–53, the US and the Soviet Union detonated their first thermonuclear weapons, far more powerful than the atomic bombs tested and deployed since 1945. In 1954, the US Castle Bravo test at Bikini Atoll had a yield of 15 megatons of TNT, more than doubling the expected yield. The Castle Bravo test resulted in the worst radiological event in US history, as radioactive particles spread over more than 11,000 square kilometers, affected inhabited areas, and sickened Japanese fishermen aboard the Lucky Dragon, upon whom "ashes of death" had rained.
In the same year, a Soviet test sent radioactive particles over Japan. Around the same time, victims of the atomic bombing of Hiroshima visited the US for medical care, which attracted significant public attention. In 1961, the Soviet Union tested the Tsar Bomba, which had a yield of 50 megatons and remains the most powerful man-made explosion in history, though due to an efficient detonation fallout was limited. Between 1951 and 1958, the US conducted 166 atmospheric tests, the Soviet Union conducted 82, and Britain conducted 21. In 1945, Britain and Canada made an early call for an international discussion on controlling atomic power. At the time, the US had yet to formulate a cohesive strategy on nuclear weapons. Taking advantage of this was Vannevar Bush, who had initiated and administered the Manhattan Project but had a long-term policy goal of banning nuclear weapons production; as a first step in this direction, Bush proposed an international agency dedicated to nuclear control. Bush unsuccessfully argued in 1952 that the US pursue a test ban agreement with the Soviet Union before testing its first thermonuclear weapon, but his interest in international controls was echoed in the 1946 Acheson–Lilienthal Report, commissioned by President Harry S. Truman to help construct US nuclear weapons policy.
J. Robert Oppenheimer, who had led Los Alamos National Laboratory during the Manhattan Project, exerted significant influence over the report, particularly in its recommendation of an international body that would control production of and research on the world's supply of uranium and thorium. A version of the Acheson–Lilienthal plan was presented to the United Nations Atomic Energy Commission as the Baruch Plan in June 1946; the Baruch Plan proposed that an International Atomic Development Authority would control all research on, and material and equipment involved in, the production of atomic energy. Though Dwight D. Eisenhower, then the Chief of Staff of the United States Army, was not a significant figure in the Truman administration on nuclear questions, he did support Truman's nuclear control policy, including the Baruch Plan's provision for an international control agency, provided that the control system was accompanied by "a system of free and complete inspection." The Soviet Union dismissed the Baruch Plan as a US attempt to secure its nuclear dominance, and called for the US to halt weapons production and release technical information on its program.
The Acheson–Lilienthal paper and Baruch Plan would serve as the basis for US policy into the 1950s. Between 1947 and 1954, the US and the Soviet Union discussed their demands within the United Nations Commission for Conventional Disarmament. A series of events in 1954, including the Castle Bravo test and the spread of fallout from a Soviet test over Japan, redirected the international discussion on nuclear policy. Additionally, by 1954 both the US and the Soviet Union had assembled large nuclear stockpiles, reducing hopes of complete disarmament. In the early years of the Cold War, the US approach to nuclear control reflected a strain between an interest in controlling nuclear weapons and a belief that dominance in the nuclear arena, given the size of Soviet conventional forces, was critical to US security. Interest in nuclear control and efforts to stall proliferation of we
Analog Science Fiction and Fact
Analog Science Fiction and Fact is an American science fiction magazine published under various titles since 1930. Titled Astounding Stories of Super-Science, the first issue was dated January 1930, published by William Clayton and edited by Harry Bates. Clayton went bankrupt in 1933 and the magazine was sold to Street & Smith; the new editor was F. Orlin Tremaine, who soon made Astounding the leading magazine in the nascent pulp science fiction field, publishing well-regarded stories such as Jack Williamson's The Legion of Space and John W. Campbell's "Twilight". At the end of 1937, Campbell took over editorial duties under Tremaine's supervision, and the following year Tremaine was let go, giving Campbell more independence. Over the next few years Campbell published many stories that became classics in the field, including Isaac Asimov's Foundation series, A. E. van Vogt's Slan, and several novels and stories by Robert A. Heinlein; the period beginning with Campbell's editorship is often referred to as the Golden Age of Science Fiction.
By 1950, new competition had appeared from Galaxy Science Fiction and The Magazine of Fantasy & Science Fiction. Campbell's interest in some pseudoscience topics, such as dianetics, alienated some of his regular writers, and Astounding was no longer regarded as the leader of the field, though it did continue to publish popular and influential stories: Hal Clement's novel Mission of Gravity appeared in 1953, and Tom Godwin's "The Cold Equations" appeared the following year. In 1960, Campbell changed the title of the magazine to Analog Science Fact & Fiction. At about the same time Street & Smith sold the magazine to Condé Nast. Campbell remained as editor until his death in 1971. Ben Bova took over from 1972 to 1978, and the character of the magazine changed noticeably, since Bova was willing to publish fiction that included sexual content and profanity. Bova published stories such as Frederik Pohl's "The Gold at the Starbow's End", nominated for both a Hugo and a Nebula Award, and Joe Haldeman's "Hero", the first story in the Hugo and Nebula Award-winning "Forever War" sequence.
Bova won five consecutive Hugo Awards for his editing of Analog. Bova was followed by Stanley Schmidt, who continued to publish many of the same authors, some of whom had been contributing for years; the title was sold to Davis Publications in 1980, then to Dell Magazines in 1992. Crosstown Publications remains the publisher. Schmidt continued to edit the magazine until 2012. In 1926, Hugo Gernsback launched Amazing Stories, the first science fiction magazine. Gernsback had been printing scientific fiction stories for some time in his hobbyist magazines, such as Modern Electrics and Electrical Experimenter, but decided that interest in the genre was sufficient to justify a monthly magazine. Amazing was successful, reaching a circulation of over 100,000. William Clayton, a successful and well-respected publisher of several pulp magazines, considered starting a competing title in 1928 but was unconvinced; the following year, however, he decided to launch a new magazine because the sheet on which the color covers of his magazines were printed had a space for one more cover.
He suggested to Harry Bates, a newly hired editor, that they start a magazine of historical adventure stories. Bates proposed instead a science fiction pulp, to be titled Astounding Stories of Super-Science, and Clayton agreed. Astounding was published by Publisher's Fiscal Corporation, a subsidiary of Clayton Magazines; the first issue appeared with Bates as editor. Bates aimed for straightforward action-adventure stories, with scientific elements present only to provide minimal plausibility. Clayton paid much better rates than Amazing and Wonder Stories—two cents a word on acceptance, rather than half a cent a word on publication—and Astounding attracted some of the better-known pulp writers, such as Murray Leinster, Victor Rousseau, and Jack Williamson. In February 1931, the original name Astounding Stories of Super-Science was shortened to Astounding Stories; the magazine was profitable. A publisher would pay a printer three months in arrears, but when a credit squeeze began in May 1931, it led to pressure to reduce this delay.
The financial difficulties led Clayton to start alternating the publication of his magazines, and he switched Astounding to a bimonthly schedule with the June 1932 issue. Printers sometimes bought the magazines of publishers who were indebted to them, so Clayton decided to buy his printer to prevent this from happening; this proved a disastrous move. Clayton did not have the money to complete the transaction, and in October 1932 he decided to cease publication of Astounding, with the expectation that the January 1933 issue would be the last one; as it turned out, enough stories were in inventory, and enough paper was available, to publish one further issue, so the last Clayton Astounding was dated March 1933. In April, Clayton went bankrupt and sold his magazine titles to T. R. Foley for $100. Science fiction was not a departure for Street & Smith. They
Nuclear pulse propulsion
Nuclear pulse propulsion, or external pulsed plasma propulsion, is a hypothetical method of spacecraft propulsion that uses nuclear explosions for thrust. It was first developed as Project Orion by DARPA, after a suggestion by Stanislaw Ulam in 1947. Newer designs using inertial confinement fusion have been the baseline for most post-Orion designs, including Project Daedalus and Project Longshot. Project Orion was the first serious attempt to design a nuclear pulse rocket; the design effort was carried out at General Atomics in the early 1960s. The idea of Orion was to react small directional nuclear explosives, utilizing a variant of the Teller-Ulam two-stage bomb design, against a large steel pusher plate attached to the spacecraft with shock absorbers. Efficient directional explosives maximized the momentum transfer, leading to specific impulses in the range of 6,000 seconds, or about thirteen times that of the Space Shuttle main engine. With refinements a theoretical maximum of 100,000 seconds might be possible.
Thrusts were in the millions of tons, allowing spacecraft larger than 8×10⁶ tons to be built with 1958 materials. The reference design was to be constructed of steel using submarine-style construction, with a crew of more than 200 and a vehicle takeoff weight of several thousand tons; this low-tech single-stage reference design would reach Mars and back in four weeks from the Earth's surface. The same craft could visit Saturn's moons in a seven-month mission. A number of engineering problems were found and solved over the course of the project, notably related to crew shielding and pusher-plate lifetime; the system appeared to be workable when the project was shut down in 1965, the main reason given being that the Partial Test Ban Treaty made it illegal. There were also ethical issues with launching such a vehicle within the Earth's magnetosphere: calculations using the now-disputed linear no-threshold model of radiation damage showed that the fallout from each takeoff would kill between 1 and 10 people.
In a threshold model, such low levels of thinly distributed radiation would have no associated ill effects, while under hormesis models such tiny doses would even be negligibly beneficial. Using less efficient clean nuclear bombs for achieving orbit and more efficient, higher-yield dirty bombs for travel would bring down the amount of fallout caused by an Earth-based launch by a significant factor. One useful mission for this near-term technology would be to deflect an asteroid that could collide with the Earth, as depicted in the 1998 film Deep Impact, though it was a comet in that particular film. The high performance would permit even a late launch to succeed, and the vehicle could transfer a large amount of kinetic energy to the asteroid by simple impact; in the event of an imminent asteroid impact, a few predicted deaths from fallout would not be considered prohibitive. An automated mission would eliminate the most problematic issues of the design: the shock absorbers. Orion is one of the few interstellar space drives that could theoretically be constructed with available technology, as discussed in a 1968 paper, Interstellar Transport by Freeman Dyson.
Project Daedalus was a study conducted between 1973 and 1978 by the British Interplanetary Society to design a plausible unmanned interstellar spacecraft that could reach a nearby star within one human scientist's working lifetime, or about 50 years. A dozen scientists and engineers led by Alan Bond worked on the project. At the time fusion research appeared to be making great strides; in particular, inertial confinement fusion (ICF) appeared to be adaptable as a rocket engine. ICF uses small pellets of fusion fuel, lithium deuteride with a small deuterium/tritium trigger at the center; the pellets are thrown into a reaction chamber where they are hit on all sides by lasers or another form of beamed energy. The heat generated by the beams explosively compresses the pellet to the point where fusion takes place; the result is a hot plasma, a small "explosion" compared to the minimum-size bomb that would otherwise be required to create the necessary reaction by fission. For Daedalus, this process was run within a large electromagnet.
After the reaction, ignited by electron beams in this case, the magnet funnelled the hot gas to the rear for thrust. Some of the energy was diverted to run the engine. In order to make the system safe and energy efficient, Daedalus was to be powered by a helium-3 fuel that would have had to be collected from Jupiter. The Medusa design is a type of nuclear pulse propulsion which has more in common with solar sails than with conventional rockets. It was envisioned by Johndale Solem in the 1990s and published in the Journal of the British Interplanetary Society. A Medusa spacecraft would deploy a large "spinnaker" sail ahead of it, attached by separate independent cables, and launch nuclear explosives forward to detonate between itself and its sail; the sail would be accelerated by the plasma and photonic impulse, running out the tethers as when a fish flees the fisherman, generating electricity at the "reel". The spacecraft would then use some of the generated electricity to reel itself up towards the sail, smoothly accelerating as it goes.
In the original design, multiple tethers connected to multiple motor generators. The
Specific impulse is a measure of how efficiently a rocket uses propellant or a jet engine uses fuel. By definition, it is the total impulse delivered per unit of propellant consumed, and it is dimensionally equivalent to the generated thrust divided by the propellant mass flow rate or weight flow rate. If mass is used as the unit of propellant, specific impulse has units of velocity. If weight is used instead, specific impulse has units of time. Multiplying flow rate by the standard gravity converts specific impulse from the mass basis to the weight basis. A propulsion system with a higher specific impulse uses the mass of the propellant more efficiently in creating forward thrust, and, in the case of a rocket, less propellant is needed for a given delta-v, per the Tsiolkovsky rocket equation. In rockets, this means the engine is more effective at gaining altitude and velocity; this effectiveness is less important in jet engines, which employ wings, use outside air for combustion, and carry payloads that are much heavier than the propellant.
Specific impulse includes the contribution to impulse provided by external air that is used for combustion and exhausted with the spent propellant. Jet engines use outside air, and therefore have a much higher specific impulse than rocket engines. The specific impulse in terms of propellant mass spent has units of distance per time, a notional velocity called the effective exhaust velocity. This is higher than the actual exhaust velocity because the mass of the combustion air is not being accounted for. Actual and effective exhaust velocity are the same in rocket engines not utilizing air or other intake propellant such as water. Specific impulse is inversely proportional to specific fuel consumption (SFC) by the relationship Isp = 1/(g0·SFC) for SFC in kg/(N·s), and Isp = 3600/SFC for SFC in lb/(lbf·hr). The amount of propellant is measured either in units of mass or weight. If mass is used, specific impulse is an impulse per unit mass, which dimensional analysis shows to have units of speed, so specific impulses are measured in meters per second and are termed effective exhaust velocity.
However, if propellant weight is used, an impulse divided by a force (weight) turns out to be a unit of time, so specific impulses are measured in seconds. These two formulations are both used and differ from each other by a factor of g0, the dimensioned constant of gravitational acceleration at the surface of the Earth. Note that the rate of change of momentum of a rocket per unit time is equal to the thrust. The higher the specific impulse, the less propellant is needed to produce a given thrust for a given time. In this regard a propellant is more efficient the greater its specific impulse; this should not be confused with energy efficiency, which can decrease as specific impulse increases, since propulsion systems that give high specific impulse require high energy to do so. Thrust and specific impulse should also not be confused: the specific impulse is the impulse produced per unit of propellant expended, while thrust is the momentary or peak force supplied by a particular engine. In many cases, propulsion systems with high specific impulse—some ion thrusters reach 10,000 seconds—produce low thrust.
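The link between specific impulse and the propellant needed for a given delta-v runs through the Tsiolkovsky rocket equation, Δv = Isp · g0 · ln(m0/mf). Below is a minimal sketch, added for illustration (the function name and the sample Isp values and mass ratio are assumptions, not from the text), comparing a chemical engine with an ion thruster at the same mass ratio:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_seconds, m0, mf):
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf),
    with Isp given in seconds (weight basis) and masses in any
    consistent unit."""
    return isp_seconds * G0 * math.log(m0 / mf)

# Same 5:1 mass ratio, two engines; the higher-Isp ion thruster
# delivers proportionally more delta-v (~7100 vs ~47000 m/s).
chem = delta_v(450, 100.0, 20.0)   # chemical engine, ~450 s
ion = delta_v(3000, 100.0, 20.0)   # ion thruster, ~3000 s
print(round(chem), round(ion))
```

Note that while the ion thruster wins on delta-v per unit propellant, its thrust is tiny, which is exactly the thrust/specific-impulse distinction drawn above.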
When calculating specific impulse, only propellant carried with the vehicle before use is counted. For a chemical rocket, the propellant mass therefore includes both fuel and oxidizer. For air-breathing engines, only the mass of the fuel is counted, not the mass of air passing through the engine. Air resistance and the engine's inability to keep a high specific impulse at a fast burn rate are why all the propellant is not used as fast as possible. A heavier engine with a higher specific impulse may not be as effective in gaining altitude, distance, or velocity as a lighter engine with a lower specific impulse. If it were not for air resistance and the reduction of propellant during flight, specific impulse would be a direct measure of the engine's effectiveness in converting propellant weight or mass into forward momentum. The most common unit for specific impulse is the second, both in SI contexts and where imperial or customary units are used. The advantage of seconds is that the unit and numerical value are identical across systems of measurement; it is universal.
Nearly all manufacturers quote their engine performance in seconds, and the unit is also useful for specifying aircraft engine performance. The use of metres per second to specify effective exhaust velocity is also reasonably common. The unit is intuitive when describing rocket engines, although the effective exhaust speed of the engines may differ from the actual exhaust speed, for example because fuel and oxidizer are dumped overboard after powering turbopumps. For airbreathing jet engines, the effective exhaust velocity is not physically meaningful, although it can be used for comparison purposes. Values expressed in N·s/kg are not uncommon and are numerically equal to the effective exhaust velocity in m/s. Specific fuel consumption is inversely proportional to specific impulse and has units of g/(kN·s) or lb/(lbf·hr). Specific fuel consumption is used extensively for describing the performance of air-breathing jet engines. The curious unit of seconds to measure the 'goodness' of a fuel/engine combination can be thought of as "how many seconds this propellant can accelerate its own initial mass at 1 g".
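The unit conversions described above are simple to state in code. This sketch is illustrative (the helper names and sample values are assumptions): it converts Isp in seconds to effective exhaust velocity via g0, and a jet-style SFC in lb/(lbf·hr) to seconds via Isp = 3600/SFC:

```python
G0 = 9.80665  # standard gravity, m/s^2

def isp_to_ve(isp_seconds):
    """Effective exhaust velocity in m/s from specific impulse in
    seconds: v_e = Isp * g0."""
    return isp_seconds * G0

def sfc_to_isp(sfc_lb_per_lbf_hr):
    """Specific impulse in seconds from thrust-specific fuel
    consumption in lb/(lbf*hr): Isp = 3600 / SFC, since there are
    3600 seconds in an hour."""
    return 3600.0 / sfc_lb_per_lbf_hr

print(isp_to_ve(450))   # ~4413 m/s for a typical chemical rocket Isp
print(sfc_to_isp(0.5))  # 7200 s for an assumed turbofan cruise SFC
```

The second result illustrates the point made earlier: because only fuel (not air) is counted, air-breathing engines show much higher specific impulses than rockets.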
The more seconds it can accelerate its own mass, the more delta-v it delivers to the whole system. For all vehicles, specific impulse in seconds is the impulse delivered per unit weight-on-Earth of propellant.
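The unit relations described above can be sketched in a few lines of Python. This is an illustrative sketch, not data for any particular engine: the exhaust velocity and mass figures below are assumed round numbers chosen only to show the conversions.

```python
# Sketch of the specific-impulse unit relations: I_sp in seconds is the
# effective exhaust velocity divided by standard gravity, and the delta-v
# delivered follows the Tsiolkovsky rocket equation.
import math

G0 = 9.80665  # standard gravity, m/s^2

def isp_seconds(v_exhaust_m_s: float) -> float:
    """Specific impulse in seconds from effective exhaust velocity (m/s)."""
    return v_exhaust_m_s / G0

def delta_v(v_exhaust_m_s: float, m_initial: float, m_final: float) -> float:
    """Tsiolkovsky rocket equation: delta-v from burning propellant."""
    return v_exhaust_m_s * math.log(m_initial / m_final)

# An assumed effective exhaust velocity of 4400 m/s corresponds to about
# 449 s of specific impulse, the same number in any unit system.
v_e = 4400.0
print(round(isp_seconds(v_e)))           # -> 449
# Burning 90% of an initial mass of 100 units down to 10 units:
print(round(delta_v(v_e, 100.0, 10.0)))  # -> 10131
```

The same 449-second figure would be quoted whether the manufacturer works in SI or in customary units, which is the portability advantage of seconds noted above.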
A magnetic field is a vector field that describes the magnetic influence of electric charges in relative motion and of magnetized materials. Magnetic fields are observed at scales from subatomic particles to galaxies. In everyday life, the effects of magnetic fields are seen in permanent magnets, which pull on magnetic materials and attract or repel other magnets. Magnetic fields surround and are created by magnetized material and by moving electric charges such as those used in electromagnets. Magnetic fields exert forces on nearby moving electric charges and torques on nearby magnets. In addition, a magnetic field that varies with location exerts a force on magnetic materials. Both the strength and direction of a magnetic field vary with location; as such, it is an example of a vector field. The term "magnetic field" is used for two distinct but related fields denoted by the symbols B and H. In the International System of Units, H, magnetic field strength, is measured in the SI base units of ampere per meter. B, magnetic flux density, is measured in tesla, equivalent to newton per meter per ampere.
H and B differ in how they account for magnetization. In a vacuum, B and H are the same aside from units. Magnetic fields are produced by moving electric charges and by the intrinsic magnetic moments of elementary particles associated with a fundamental quantum property, their spin. Magnetic fields and electric fields are interrelated; both are components of the electromagnetic force, one of the four fundamental forces of nature. Magnetic fields are used throughout modern technology, particularly in electrical engineering and electromechanics. Rotating magnetic fields are used in both electric motors and generators. The interaction of magnetic fields in electric devices such as transformers is studied in the discipline of magnetic circuits. Magnetic forces give information about the charge carriers in a material through the Hall effect. The Earth produces its own magnetic field, which shields the Earth's ozone layer from the solar wind and is important in navigation using a compass. Although magnets and magnetism were studied much earlier, the research of magnetic fields began in 1269, when French scholar Petrus Peregrinus de Maricourt mapped out the magnetic field on the surface of a spherical magnet using iron needles.
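The vacuum relation between the two fields can be made concrete with a minimal sketch: in vacuum B and H are proportional through the vacuum permeability, so converting ampere per metre to tesla is a single multiplication. The 40 A/m input below is an assumed order-of-magnitude figure for Earth's surface field, used only for illustration.

```python
# Minimal sketch of the vacuum relation between the two magnetic fields:
# B = mu_0 * H, with H in ampere per metre and B in tesla.
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability, T*m/A (classical value)

def b_from_h_vacuum(h_a_per_m: float) -> float:
    """Magnetic flux density B (tesla) for a given field strength H (A/m)."""
    return MU_0 * h_a_per_m

# A field strength of order 40 A/m corresponds to roughly 50 microtesla.
print(b_from_h_vacuum(40.0))  # about 5.03e-05
```

Inside a magnetized material the two fields differ by the magnetization term, which is exactly where H and B part ways.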
Noting that the resulting field lines crossed at two points, he named those points "poles" in analogy to Earth's poles. He clearly articulated the principle that magnets always have both a north and a south pole, no matter how finely one slices them. Three centuries later, William Gilbert of Colchester replicated Petrus Peregrinus's work and was the first to state explicitly that Earth itself is a magnet. Published in 1600, Gilbert's work, De Magnete, helped to establish magnetism as a science. In 1750, John Michell stated that magnetic poles attract and repel in accordance with an inverse-square law. Charles-Augustin de Coulomb experimentally verified this in 1785 and stated explicitly that the north and south poles cannot be separated. Building on this force between poles, Siméon Denis Poisson created the first successful model of the magnetic field, which he presented in 1824. In this model, a magnetic H-field is produced by "magnetic poles", and magnetism is due to small pairs of north/south magnetic poles. Three discoveries in 1820, however, challenged this foundation of magnetism.
Hans Christian Ørsted demonstrated that a current-carrying wire is surrounded by a circular magnetic field. André-Marie Ampère showed that parallel wires carrying currents attract one another if the currents flow in the same direction and repel if they flow in opposite directions. Jean-Baptiste Biot and Félix Savart announced empirical results about the force that a long, straight current-carrying wire exerts on a small magnet, determining that the force is inversely proportional to the perpendicular distance from the wire to the magnet. Laplace then deduced, but did not publish, a law of force based on the differential action of a differential section of the wire, which became known as the Biot–Savart law. Extending these experiments, Ampère published his own successful model of magnetism in 1825. In it, he showed the equivalence of electrical currents to magnets and proposed that magnetism is due to perpetually flowing loops of current, instead of the dipoles of magnetic charge in Poisson's model.
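The inverse-distance dependence that Biot and Savart measured is captured by the standard result for an infinite straight wire, B = μ₀I/(2πr). The sketch below uses assumed values (10 A, distances of 5 cm and 10 cm) purely to illustrate that doubling the perpendicular distance halves the field.

```python
# Illustration of the inverse-distance law Biot and Savart observed:
# for a long straight wire carrying current I, B = mu_0 * I / (2 * pi * r),
# where r is the perpendicular distance from the wire.
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def wire_field(current_a: float, distance_m: float) -> float:
    """Magnitude of B (tesla) at perpendicular distance r from a long wire."""
    return MU_0 * current_a / (2.0 * math.pi * distance_m)

b_near = wire_field(10.0, 0.05)  # 10 A at 5 cm: 4e-5 T
b_far = wire_field(10.0, 0.10)   # 10 A at 10 cm: half as strong
print(b_near / b_far)            # ratio is 2, as the 1/r law requires
```

Ampère's law and the Biot–Savart law, introduced in the next paragraph, both reproduce this 1/r dependence for a steady current in a straight wire.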
This has the additional benefit of explaining why magnetic charge cannot be isolated. Further, Ampère derived both Ampère's force law, describing the force between two currents, and Ampère's law, which, like the Biot–Savart law, describes the magnetic field generated by a steady current. In this work, Ampère introduced the term electrodynamics to describe the relationship between electricity and magnetism. In 1831, Michael Faraday discovered electromagnetic induction when he found that a changing magnetic field generates an encircling electric field; he described this phenomenon in what is now known as Faraday's law of induction. Franz Ernst Neumann later proved that, for a moving conductor in a magnetic field, induction is a consequence of Ampère's force law. In the process, he introduced the magnetic vector potential, later shown to be equivalent to the underlying mechanism proposed by Faraday. In 1850, Lord Kelvin, then known as William Thomson, distinguished between two magnetic fields, now denoted H and B; the former applied to Poisson's model and the latter to Ampère's model and induction. Further, he derived how H and B relate to each other and coined the term permeability.