Ship gun fire-control system
Ship gun fire-control systems (GFCS) are fire-control systems that enable remote and automatic targeting of guns against surface ships and shore targets, with either optical or radar sighting. Most US ships of destroyer size or larger employed a GFCS for their 5-inch and larger guns, up to battleships such as USS Iowa. Beginning with ships built in the 1960s, GFCSs were integrated with missile fire-control systems and other ship sensors. The major components of a GFCS are a manned director (augmented with or replaced by radar or a television camera), a computer, a stabilizing device or gyro, and equipment in a plotting room. For the USN, the most prevalent gunnery computer was the Ford Mark 1, later the Mark 1A, Fire Control Computer: an electro-mechanical analog ballistic computer that provided accurate firing solutions and could automatically control one or more gun mounts against stationary or moving targets on the surface or in the air. This gave American forces a technological advantage in World War II over the Japanese, who did not develop remote power control for their guns.
Digital computers would not be adopted for this purpose by the US until the mid-1970s. The Mark 37 Gun Fire Control System, which incorporated the Mark 1 computer, the Mark 37 director, and a gyroscopic stable element along with automatic gun control, was the first USN dual-purpose GFCS to separate the computer from the director. Naval fire control resembles that of ground-based guns, but with no sharp distinction between direct and indirect fire; it is possible to control several same-type guns on a single platform while both the firing ship and the target are moving. Though a ship rolls and pitches at a slower rate than a tank does, gyroscopic stabilization is desirable. Naval gun fire control involves three levels of complexity. Local control originated with primitive gun installations aimed by the individual gun crews. Director control, pioneered by the British Royal Navy in 1912, laid all guns on a single ship from a central position placed as high as possible above the bridge. The director became a design feature of battleships, with Japanese "pagoda-style" masts designed to maximize the view of the director over long ranges.
A fire control officer who ranged the salvos transmitted angles to the individual guns. Coordinated gunfire from a formation of ships at a single target was a focus of battleship fleet operations; an officer on the flagship would signal target information to the other ships in the formation. This was necessary to exploit the tactical advantage when one fleet succeeded in crossing the other's T, but the difficulty of telling one ship's shell splashes from another's made walking the rounds onto the target harder. Corrections can be made for surface wind velocity, the firing ship's roll and pitch, powder magazine temperature, drift of rifled projectiles, individual gun bore diameter adjusted for shot-to-shot enlargement, and rate of change of range, with additional modifications to the firing solution based upon observation of preceding shots. More sophisticated fire control systems consider more of these factors rather than relying on simple correction of the observed fall of shot. Differently colored dye markers were sometimes included with large shells so that individual guns, or individual ships in formation, could distinguish their shell splashes during daylight.
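The spotting feedback described above can be sketched in a few lines of Python. This is an illustrative toy, not a reconstruction of any historical fire-control procedure; the damping factor and splash distances are invented for the example.

```python
# Illustrative sketch of "walking rounds onto the target": the observed
# over/short distance of each salvo feeds a damped correction back into
# the range solution. All numbers here are hypothetical.

def correct_range(range_yd: float, observed_miss_yd: float,
                  damping: float = 0.5) -> float:
    """A positive miss means the salvo landed beyond the target,
    so the next solution shortens the range (and vice versa)."""
    return range_yd - damping * observed_miss_yd

solution = 18_000.0                        # initial range estimate, yards
for splash in (+800.0, -350.0, +120.0):    # spotted over/short distances
    solution = correct_range(solution, splash)
    print(f"next salvo range: {solution:,.0f} yd")
```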
Early "computers" were people using numerical tables. Centralized naval fire control systems were first developed around the time of World War I. Local control had been used up until that time, remained in use on smaller warships and auxiliaries through World War II, it may still be used for machine guns aboard patrol craft. Beginning with the British battleship HMS Dreadnought, large warships had at least six similar big guns, which facilitated central fire control. For the UK, their first central system was built before the Great War. At the heart was an analogue computer designed by Commander Frederic Charles Dreyer that calculated rate of change of range; the Dreyer Table was to be improved and served into the interwar period at which point it was superseded in new and reconstructed ships by the Admiralty Fire Control Table. The use of Director-controlled firing together with the fire control computer moved the control of the gun laying from the individual turrets to a central position, although individual gun mounts and multi-gun turrets may retain a local control option for use when battle damage limits Director information transfer.
Guns could then be fired in planned salvos, with each gun giving a slightly different trajectory. Dispersion of shot caused by differences in individual guns, individual projectiles, powder ignition sequences, and transient distortion of the ship's structure was undesirably large at typical naval engagement ranges. Directors high on the superstructure had a better view of the enemy than a turret-mounted sight, and the crew operating them were distant from the sound and shock of the guns. Unmeasured and uncontrollable ballistic factors, like high-altitude temperature, barometric pressure, and wind direction and velocity, required final adjustment through observation of the fall of shot. Visual range measurement was difficult prior to the availability of radar. The British favoured coincidence rangefinders while the Germans and the U.S. Navy favoured the stereoscopic type; the former were less a
Residual gas analyzer
A residual gas analyzer (RGA) is a small and rugged mass spectrometer designed for process control and contamination monitoring in vacuum systems. Utilizing quadrupole technology, there exist two implementations: one with an open ion source (OIS) and one with a closed ion source (CIS). RGAs may be found in high vacuum applications such as research chambers, surface science setups, scanning microscopes, etc. RGAs are used in most cases to monitor the quality of the vacuum and detect minute traces of impurities in the low-pressure gas environment; these impurities can be measured down to 10⁻¹⁴ Torr, giving sub-ppm detectability in the absence of background interferences. RGAs can also be used as sensitive in-situ leak detectors using helium, isopropyl alcohol, or other tracer molecules. With vacuum systems pumped down to lower than 10⁻⁵ Torr, air leaks, virtual leaks, and other contaminants at low levels may be detected before a process is initiated, checking the integrity of the vacuum seals and the quality of the vacuum.
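The helium leak check described above can be sketched as a short monitoring loop. This is a hypothetical example: `read_partial_pressure` stands in for a vendor-specific RGA driver, and the background level and threshold are invented.

```python
# Hypothetical RGA-based helium leak check: spray helium at each suspect
# seal and watch the mass-4 partial pressure for a jump above background.

HELIUM_AMU = 4
BACKGROUND_TORR = 2e-11      # assumed helium background in this chamber
LEAK_FACTOR = 10.0           # flag a leak if signal exceeds 10x background

def check_seal(read_partial_pressure, seal_name: str) -> bool:
    """read_partial_pressure(mass_amu) -> Torr is a stand-in driver call."""
    p_he = read_partial_pressure(HELIUM_AMU)   # read while spraying helium
    leaking = p_he > LEAK_FACTOR * BACKGROUND_TORR
    print(f"{seal_name}: He partial pressure {p_he:.2e} Torr "
          f"-> {'LEAK' if leaking else 'ok'}")
    return leaking
```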
The OIS is the most common type of RGA. A residual gas analyzer measures partial pressures by sorting ions according to their mass-to-charge ratio as they pass through the quadrupole. Cylindrical and axially symmetrical, this kind of ionizer has been around since the early 1950s. The OIS type is mounted directly to the vacuum chamber, exposing the filament wire and anode wire cage to the surrounding vacuum, allowing all molecules in the chamber to move through the ion source. It has a maximum operating pressure of 10⁻⁴ Torr and a minimum detectable partial pressure as low as 10⁻¹⁴ Torr when used in tandem with an electron multiplier. OIS RGAs measure residual gas levels without affecting the gas composition of their vacuum environment, though there are performance limitations. These include outgassing of water from the chamber and of H₂ from the OIS electrodes and from the 300-series stainless steel commonly used in the surrounding vacuum chamber, driven by the high temperatures of the hot-cathode source. Electron-stimulated desorption, rather than electron-impact ionization of gaseous species, gives rise to peaks at 12, 16, 19, and 35 u, with effects similar to those of outgassing.
This is counteracted by gold-plating the ionizer, which reduces the adsorption of many gases; using platinum-clad molybdenum ionizers is an alternative. For applications requiring measurement of pressures between 10⁻⁴ and 10⁻³ Torr, the problem of ambient and process gases can be reduced by replacing the OIS configuration with a CIS sampling system. Such an ionizer sits on top of the quadrupole mass filter and consists of a short, gas-tight tube with two openings, one for the entrance of electrons and one for the exit of ions; the ions formed inside the tube leave the ionizer through the latter. Electrically insulated alumina rings seal the tube and the biased electrodes from the rest of the quadrupole mass assembly. The ions are produced by electron impact directly at the process pressure. Such a design had been applied to gas chromatography mass spectrometry instruments before its adaptation to quadrupole gas analyzers. Most commercially available CIS systems operate between 10⁻² and 10⁻¹¹ Torr and offer ppm-level detectability over the entire mass range for process pressures between 10⁻⁴ and 10⁻² Torr.
The upper limit is set by the reduction in mean free path for ion-neutral collisions that occurs at higher pressures, which scatters ions and reduces sensitivity. The CIS anode may be viewed as a high-conductance tube connected directly to the process chamber, so the pressure in the ionization area is the same as in the rest of the chamber. Thus the CIS ionizer produces ions by electron impact directly at the process pressure whilst the rest of the mass analyzer is kept under high vacuum; such direct sampling provides good sensitivity and fast response times.
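The mean-free-path argument behind that upper limit can be made concrete with the standard kinetic-theory formula λ = kT / (√2·π·d²·p). A short sketch, assuming a nitrogen-like molecular diameter of 3.7 Å and room temperature:

```python
import math

# Mean free path of a gas molecule, illustrating why ion-neutral scattering
# caps an RGA's usable pressure. Assumed: d = 3.7e-10 m (N2-like), T = 300 K.

K_B = 1.380649e-23           # Boltzmann constant, J/K
TORR_TO_PA = 133.322

def mean_free_path_m(p_torr, d=3.7e-10, t=300.0):
    p = p_torr * TORR_TO_PA
    return K_B * t / (math.sqrt(2) * math.pi * d**2 * p)

for p in (1e-2, 1e-4, 1e-6):
    print(f"{p:.0e} Torr -> mean free path {mean_free_path_m(p):.3g} m")
```

At 10⁻⁴ Torr the mean free path is on the order of half a meter, much larger than the analyzer itself; near 10⁻² Torr it shrinks to millimeters, which is where scattering begins to erode sensitivity.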
Numerical weather prediction
Numerical weather prediction (NWP) uses mathematical models of the atmosphere and oceans to predict the weather based on current weather conditions. Though first attempted in the 1920s, it was not until the advent of computer simulation in the 1950s that numerical weather predictions produced realistic results. A number of global and regional forecast models are run in different countries worldwide, using current weather observations relayed from radiosondes, weather satellites, and other observing systems as inputs. Mathematical models based on the same physical principles can be used to generate either short-term weather forecasts or longer-term climate predictions; improvements to regional models have allowed significant advances in tropical cyclone track and air quality forecasts. Manipulating the vast datasets and performing the complex calculations necessary for modern numerical weather prediction requires some of the most powerful supercomputers in the world. Even with this increasing computer power, the forecast skill of numerical weather models extends only to about six days.
Factors affecting the accuracy of numerical predictions include the density and quality of observations used as input to the forecasts, along with deficiencies in the numerical models themselves. Post-processing techniques such as model output statistics have been developed to improve the handling of errors in numerical predictions. A more fundamental problem lies in the chaotic nature of the partial differential equations that govern the atmosphere: it is impossible to solve these equations exactly, and small errors grow with time. Present understanding is that this chaotic behavior limits accurate forecasts to about 14 days even with accurate input data and a flawless model. In addition, the partial differential equations used in the model need to be supplemented with parameterizations for solar radiation, moist processes, heat exchange, vegetation, surface water, and the effects of terrain. In an effort to quantify the large amount of inherent uncertainty remaining in numerical predictions, ensemble forecasts have been used since the 1990s to help gauge the confidence in the forecast and to obtain useful results farther into the future than otherwise possible.
This approach analyzes multiple forecasts created with an individual forecast model or with multiple models. The history of numerical weather prediction began in the 1920s through the efforts of Lewis Fry Richardson, who used procedures developed by Vilhelm Bjerknes to produce by hand a six-hour forecast for the state of the atmosphere over two points in central Europe, taking at least six weeks to do so. It was not until the advent of the computer and computer simulations that computation time was reduced to less than the forecast period itself. The ENIAC was used to create the first weather forecasts via computer in 1950, based on a simplified approximation of the atmospheric governing equations. In 1954, Carl-Gustaf Rossby's group at the Swedish Meteorological and Hydrological Institute used the same model to produce the first operational forecast. Operational numerical weather prediction in the United States began in 1955 under the Joint Numerical Weather Prediction Unit, a joint project of the U.S. Air Force, Navy, and Weather Bureau.
In 1956, Norman Phillips developed a mathematical model which could realistically depict monthly and seasonal patterns in the troposphere. Following Phillips' work, several groups began working to create general circulation models; the first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory. As computers have become more powerful, the size of the initial data sets has increased and newer atmospheric models have been developed to take advantage of the added available computing power; these newer models include more physical processes in the simplifications of the equations of motion in numerical simulations of the atmosphere. In 1966, West Germany and the United States began producing operational forecasts based on primitive-equation models, followed by the United Kingdom in 1972 and Australia in 1977. The development of limited-area models facilitated advances in forecasting the tracks of tropical cyclones as well as air quality in the 1970s and 1980s.
By the early 1980s models began to include the interactions of soil and vegetation with the atmosphere, which led to more realistic forecasts. The output of forecast models based on atmospheric dynamics is unable to resolve some details of the weather near the Earth's surface; as such, a statistical relationship between the output of a numerical weather model and the ensuing conditions at the ground, known as model output statistics, was developed in the 1970s and 1980s. Starting in the 1990s, model ensemble forecasts have been used to help define the forecast uncertainty and to extend the window in which numerical weather forecasting is viable farther into the future than otherwise possible. The atmosphere is a fluid; as such, the idea of numerical weather prediction is to sample the state of the fluid at a given time and use the equations of fluid dynamics and thermodynamics to estimate the state of the fluid at some time in the future. The process of entering observation data into the model to generate initial conditions is called initialization.
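The basic procedure, discretize the fluid state on a grid and march it forward under its governing equation, can be illustrated with a toy problem. In the sketch below a one-dimensional advection equation stands in for the full primitive equations; the grid, wind speed, and initial field are all invented:

```python
import numpy as np

# Toy illustration of numerical prediction: sample a field on a grid, then
# step it forward with a finite-difference form of du/dt = -c * du/dx
# (upwind differencing; dt chosen for CFL stability).

nx, dx, c = 100, 1.0, 1.0
dt = 0.5 * dx / c                                  # CFL number of 0.5
u = np.exp(-0.01 * (np.arange(nx) - 30.0) ** 2)    # initial "weather" blob

for _ in range(40):                                # integrate 40 steps ahead
    u[1:] -= c * dt / dx * (u[1:] - u[:-1])        # upwind update

print("blob peak has advected from x = 30 to x =", u.argmax())
```

Real models do the same thing with millions of grid points in three dimensions and far more complete physics, which is why they demand supercomputers.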
On land, terrain maps available at resolutions dow
Ion source

An ion source is a device that creates atomic and molecular ions. Ion sources are used to form ions for mass spectrometers, optical emission spectrometers, particle accelerators, ion implanters, and ion engines. Electron ionization is widely used in mass spectrometry for organic molecules. The gas-phase reaction producing electron ionization is M + e⁻ ⟶ M⁺• + 2e⁻, where M is the atom or molecule being ionized, e⁻ is the electron, and M⁺• is the resulting ion. The electrons may be created by an arc discharge between a cathode and an anode. An electron beam ion source is used in atomic physics to produce highly charged ions by bombarding atoms with a powerful electron beam; its principle of operation is shared by the electron beam ion trap. Electron capture ionization is the ionization of a gas-phase atom or molecule by attachment of an electron to create an ion of the form A⁻•. The reaction is A + e⁻ ⟶ A⁻•, which requires a third body M (often written over the reaction arrow) to conserve energy and momentum. Electron capture can be used in conjunction with chemical ionization.
An electron capture detector is used in some gas chromatography systems. Chemical ionization (CI) is a lower-energy process than electron ionization because it involves ion/molecule reactions rather than electron removal; the lower energy yields less fragmentation and a simpler spectrum. A typical CI spectrum has an easily identifiable molecular ion. In a CI experiment, ions are produced through the collision of the analyte with ions of a reagent gas in the ion source; common reagent gases include methane and isobutane. Inside the ion source, the reagent gas is present in large excess compared to the analyte. Electrons entering the source preferentially ionize the reagent gas, and the resultant collisions with other reagent gas molecules create an ionization plasma. Positive and negative ions of the analyte are formed by reactions with this plasma. For example, protonation with methane occurs by the sequence CH₄ + e⁻ ⟶ CH₄⁺• + 2e⁻, then CH₄⁺• + CH₄ ⟶ CH₅⁺ + CH₃•, and finally M + CH₅⁺ ⟶ [M+H]⁺ + CH₄. Charge-exchange ionization is a gas-phase reaction between an ion and an atom or molecule in which the charge of the ion is transferred to the neutral species:
A⁺ + B ⟶ A + B⁺. Chemi-ionization is the formation of an ion through the reaction of a gas-phase atom or molecule with an atom or molecule in an excited state. It can be represented by G* + M ⟶ M⁺• + e⁻ + G, where G* is the excited-state species and M is the species ionized by the loss of an electron to form the radical cation M⁺•. Associative ionization is a gas-phase reaction in which two atoms or molecules interact to form a single product ion. One or both of the interacting species may have excess internal energy. For example, A* + B ⟶ AB⁺• + e⁻, where species A with excess internal energy interacts with B to form the ion AB⁺•. Penning ionization is a form of chemi-ionization involving reactions between neutral atoms or molecules; the process is named after the Dutch physicist Frans Michel Penning, who first reported it in 1927. Penning ionization involves a reaction between a gas-phase excited-state atom or molecule G* and a target molecule M, resulting in the formation of a radical molecular cation M⁺•, an electron e⁻, and a neutral gas molecule G: G* + M ⟶ M⁺• + e⁻ + G. Penning ionization occurs when the target molecule has an ionization potential lower than the internal energy of the excited-state atom or molecule.
Associative Penning ionization can proceed via G* + M ⟶ MG⁺• + e⁻.
Architectural acoustics

Architectural acoustics is the science and engineering of achieving good sound within a building, and is a branch of acoustical engineering. The first application of modern scientific methods to architectural acoustics was carried out by Wallace Sabine in the Fogg Museum lecture room; he then applied his newfound knowledge to the design of Symphony Hall, Boston. Architectural acoustics can be about achieving good speech intelligibility in a theatre, restaurant, or railway station, enhancing the quality of music in a concert hall or recording studio, or suppressing noise to make offices and homes more productive and pleasant places to work and live in. Architectural acoustic design is done by acoustic consultants. One branch of the science analyzes noise transmission from the building's exterior envelope to the interior and vice versa. The main noise paths are roofs, walls, windows, and penetrations. Sufficient control ensures space functionality and is required based on building use and local municipal codes. An example would be providing a suitable design for a home to be constructed close to a high-volume roadway, under the flight path of a major airport, or at the airport itself.
A second branch is the science of limiting and/or controlling noise transmission from one building space to another to ensure space functionality and speech privacy. The typical sound paths are ceilings, room partitions, acoustic ceiling panels, windows, flanking paths, and other penetrations. Technical solutions depend on the source of the noise and the path of acoustic transmission, for example impact noise from footsteps or noise from flow-induced vibration. An example would be providing a suitable party-wall design in an apartment complex to minimize the mutual disturbance due to noise from residents in adjacent apartments. A third branch is the science of controlling a room's surfaces based on their sound-absorbing and reflecting properties. Excessive reverberation time, which can be calculated (a short sketch follows below), can lead to poor speech intelligibility. Sound reflections create standing waves that produce natural resonances, which can be heard as either a pleasant sensation or an annoying one. Reflective surfaces can be angled and coordinated to provide good coverage of sound for a listener in a concert hall or music recital space.
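The classic reverberation-time calculation referred to above is Sabine's formula, RT60 = 0.161·V/A in metric units, where V is the room volume and A is the total absorption, the sum over surfaces of area times absorption coefficient. A brief sketch with invented room numbers:

```python
# Sabine reverberation time: RT60 = 0.161 * V / A (V in m^3, A in
# metric sabins, i.e. m^2 of equivalent perfect absorption).
# Room dimensions and absorption coefficients below are illustrative.

def rt60_sabine(volume_m3, surfaces):
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption

hard_room = [(200.0, 0.02),   # painted walls: large area, little absorption
             (60.0, 0.03),    # concrete floor
             (60.0, 0.05)]    # plaster ceiling
treated = hard_room + [(40.0, 0.85)]   # add fabric-wrapped absorber panels

print(f"all hard surfaces: {rt60_sabine(360.0, hard_room):.1f} s")
print(f"with panels:       {rt60_sabine(360.0, treated):.1f} s")
```

The drop from several seconds to around one second is the difference one hears between a reverberant all-hard-surface room and a treated one.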
To illustrate this concept, consider the difference between a modern large office meeting room or lecture theater and a traditional classroom with all hard surfaces; the difference lies in the treatment of the interior surface finishes. Ideal acoustical panels are those without a face or finish material that interferes with the acoustical infill or substrate. Fabric-covered panels and perforated metal both show good sound-absorbing qualities. A finish material is used to cover the acoustical substrate; mineral fiber board, such as Micore, is a commonly used substrate, and finish materials include fabric, wood, and acoustical tile. Fabric can be wrapped around a substrate to create what is referred to as a "pre-fabricated panel", which provides good noise absorption when laid onto a wall. Prefabricated panels are limited to the size of the substrate, ranging from 2 by 4 feet to 4 by 10 feet. Fabric retained in a wall-mounted perimeter track system is referred to as "on-site acoustical wall panels"; these are constructed by framing the perimeter track into shape, infilling the acoustical substrate, and stretching and tucking the fabric into the perimeter frame system.
On-site wall panels can be constructed to accommodate door frames, baseboards, or any other intrusions, and large panels can be created on ceilings with this method. Wood finishes can consist of punched or routed slots and provide a natural look to the interior space, although their acoustical absorption may not be great. There are four ways to solve workplace sound problems, the "ABCDs": A = Absorb, B = Block, C = Cover up, and D = Diffuse. Building services noise control is the science of controlling noise produced by ACMV systems in buildings (termed HVAC in North America), elevators, electrical generators positioned within or attached to a building, and any other building service infrastructure component that emits sound. Inadequate control may lead to elevated sound levels within the space, which can be annoying and reduce speech intelligibility. Typical improvements are vibration isolation of mechanical equipment and sound traps in ductwork. Sound masking can also be created by adjusting HVAC noise to a predetermined level.
Differentiator

In electronics, a differentiator is a circuit designed such that its output is directly proportional to the rate of change of its input. An active differentiator includes some form of amplifier, while a passive differentiator circuit is made of only resistors and capacitors (or resistors and inductors). A true differentiator cannot be physically realized, because it has infinite gain at infinite frequency; a similar effect can be achieved, however, by limiting the gain above some frequency. A passive differentiator circuit can therefore be made using a simple first-order high-pass filter, with the cut-off frequency set far above the highest frequency in the signal. This is a four-terminal network consisting of two passive elements, as shown in figures 1 and 2. The analysis here is for the capacitive circuit in figure 1; the inductive case in figure 2 can be handled in a similar way. The transfer function shows the dependence of the network gain on the signal frequency for sinusoidal signals. By Ohm's law, Y = X·Z_R/(Z_R + Z_C) = X·R/(R + 1/(jωC)) = X·1/(1 + 1/(jωRC)), where X and Y are the input and output signal amplitudes, and Z_R and Z_C are the impedances of the resistor and the capacitor.
Therefore, the complex transfer function is K = 1/(1 + 1/(jωRC)) = 1/(1 + ω₀/(jω)), where ω₀ = 1/(RC). The amplitude transfer function is H ≜ |K| = 1/√(1 + (ω₀/ω)²), and the phase transfer function is φ ≜ arg K = arctan(ω₀/ω), both shown in figure 3. Transfer functions for the second circuit are the same. The circuit's impulse response, shown in figure 4, can be derived as the inverse Laplace transform of the complex transfer function: h(t) = δ(t) − ω₀·e^(−ω₀t) = δ(t) − (1/τ)·e^(−t/τ), where τ = 1/ω₀ is the time constant and δ is the delta function. An active differentiator circuit consists of an operational amplifier in which a resistor R provides negative feedback and a capacitor C is used at the input side. The circuit is based on the capacitor's current-to-voltage relationship I = C·dV/dt, where I is the current through the capacitor, C is its capacitance, and V is the voltage across it. The current flowing through the capacitor is thus proportional to the derivative of the voltage across the capacitor.
This current can be passed through a resistor, which has the current-to-voltage relationship I = V/R, where R is the resistance of the resistor.
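Combining the two relationships gives the ideal op-amp differentiator's output, Vout = −R·C·dVin/dt, which a brief numerical sketch can check; the component values and test signal below are arbitrary:

```python
import numpy as np

# Ideal op-amp differentiator: Vout = -R*C * dVin/dt. A unit-amplitude
# sine input should emerge as a sign-flipped cosine of amplitude w*R*C.

R, C = 10e3, 100e-9                  # 10 kOhm feedback R, 100 nF input C
t = np.linspace(0.0, 2e-3, 2000)     # 2 ms of time at ~1 us resolution
vin = np.sin(2 * np.pi * 1e3 * t)    # 1 kHz sine input

vout = -R * C * np.gradient(vin, t)  # numerical derivative of the input

print("expected peak |Vout|:", 2 * np.pi * 1e3 * R * C)   # ~6.28 V
print("simulated peak |Vout|:", abs(vout).max())
```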
Noise barrier

A noise barrier is an exterior structure designed to protect inhabitants of sensitive land use areas from noise pollution. Noise barriers are the most effective method of mitigating roadway and industrial noise sources, other than cessation of the source activity or use of source controls. In the case of surface transportation noise, other methods of reducing the source noise intensity include encouraging the use of hybrid and electric vehicles, improving automobile aerodynamics and tire design, and choosing low-noise paving material. Noise barriers have been built in the United States since the mid-twentieth century, when vehicular traffic burgeoned, and their extensive use began after noise regulations were introduced in the early 1970s. An early example, possibly the first, was built along I-680 in Milpitas, California. In the late 1960s, analytic acoustical technology emerged to mathematically evaluate the efficacy of a noise barrier design adjacent to a specific roadway. By the 1990s, noise barriers that included use of transparent materials were being designed in Denmark and other western European countries.
The best of these early computer models considered the effects of roadway geometry, vehicle volumes, vehicle speeds, truck mix, road surface type, and micro-meteorology. Several U.S. research groups, including Caltrans Headquarters in Sacramento, California, developed variations of the computer modeling techniques. The earliest published work that scientifically designed a specific noise barrier was the study for the Foothill Expressway in Los Altos, California. Numerous case studies across the U.S. soon addressed dozens of planned highways; most were commissioned by state highway departments and conducted by one of these research groups. The U.S. National Environmental Policy Act mandated the quantitative analysis of noise pollution from every Federal-Aid Highway Act project in the country, propelling noise barrier model development and application. With passage of the Noise Control Act of 1972, demand for noise barrier design soared from a host of noise regulation spinoffs.
By the late 1970s, more than a dozen research groups in the U.S. were applying similar computer modeling technology and addressing at least 200 different locations for noise barriers each year. As of 2006, this technology is considered a standard in the evaluation of noise pollution from highways; the nature and accuracy of the computer models used is nearly identical to the original 1970s versions of the technology. The acoustical science of noise barrier design is based upon treating a roadway or railway as a line source. The theory is based upon blockage of sound-ray travel toward a particular receptor; however, sound waves diffract, bending over the top of an obstruction, so a barrier that blocks the line of sight to a highway or other source attenuates the sound rather than eliminating it (a common empirical treatment of this diffraction is sketched below). Further complicating matters is the phenomenon of refraction, the bending of sound rays in the presence of an inhomogeneous atmosphere; wind shear and thermoclines produce such inhomogeneities. The sound sources modeled must include engine noise, tire noise, and aerodynamic noise, all of which vary by vehicle type and speed.
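The diffraction loss mentioned above is often estimated with Maekawa's empirical relation, commonly approximated as IL ≈ 10·log₁₀(3 + 20N) dB, where the Fresnel number N = 2δ/λ and δ is the extra path length a sound ray must travel to pass over the barrier edge. A sketch with an invented geometry:

```python
import math

# Maekawa's approximation for barrier insertion loss:
#   IL = 10*log10(3 + 20*N) dB, with Fresnel number N = 2*delta/lambda.
# delta (extra path length over the barrier) is illustrative here.

def insertion_loss_db(delta_m, freq_hz, c=343.0):
    lam = c / freq_hz                 # wavelength at this frequency
    n = 2.0 * delta_m / lam           # Fresnel number
    return 10.0 * math.log10(3.0 + 20.0 * n)

for f in (125, 500, 2000):
    print(f"{f:>5} Hz: {insertion_loss_db(delta_m=0.5, freq_hz=f):.1f} dB")
```

The trend it shows, greater attenuation at higher frequencies, is why barriers tame tire noise more effectively than low-frequency engine rumble.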
The noise barrier may be constructed on private land, on a public right-of-way, or on other public land. Because sound levels are measured using a logarithmic scale, a reduction of nine decibels is equivalent to elimination of approximately 87 percent of the unwanted sound power (the arithmetic is sketched below). Several different materials may be used for sound barriers, including masonry, steel, wood, insulating wool, and composites. Walls made of absorptive material mitigate sound differently than hard surfaces do. It is now possible to make noise barriers with active materials such as solar photovoltaic panels, generating electricity while reducing traffic noise. A wall with a porous surface material and sound-dampening content can be absorptive, reflecting little or no noise back towards the source or elsewhere. Hard surfaces such as masonry or concrete are considered reflective: most of the noise is reflected back towards the noise source and beyond. Noise barriers can be effective tools for noise pollution abatement, but certain locations and topographies are not suitable for their use.
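The decibel arithmetic behind that figure: an attenuation of x dB leaves a fraction 10^(−x/10) of the original sound power.

```python
# Fraction of sound power eliminated by a given decibel reduction:
# the remaining power fraction is 10**(-dB/10).

def power_fraction_eliminated(reduction_db):
    return 1.0 - 10.0 ** (-reduction_db / 10.0)

for db in (3, 6, 9, 10):
    print(f"{db} dB reduction eliminates {power_fraction_eliminated(db):.1%}")
```

Nine decibels eliminates 1 − 10^(−0.9) ≈ 87.4 percent of the power, and 10 dB eliminates 90 percent.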
Cost and aesthetics play a role in the choice of noise barriers. In some cases, a roadway is surrounded by a noise abatement structure or dug into a tunnel using the cut-and-cover method. Potential disadvantages of noise barriers include aesthetic impacts for motorists and neighbors if scenic vistas are blocked, costs of design and maintenance, and the necessity of designing custom drainage where the barrier would otherwise interrupt runoff. Normally, the benefits of noise reduction far outweigh the aesthetic impacts for residents protected from unwanted sound; these benefits include lessened sleep disturbance, improved ability to enjoy outdoor life, reduced speech interference, stress reduction, reduced risk of hearing impairment, and a reduction in the elevated blood pressure caused by noise. Sound barrier walls vary in cost depending on their quality. Concrete is popular due to its lower cost, but since concrete walls are reflective, they can create noise for those across from the barrier. Absorptive barriers absorb, and