Disk storage is a general category of storage mechanisms in which data is recorded by electronic, optical, or mechanical changes to a surface layer of one or more rotating disks. A disk drive is a device implementing such a storage mechanism. Notable types are the hard disk drive, containing a non-removable disk; the floppy disk drive and its removable floppy disk; and various optical disc drives and their associated optical disc media. Audio information was originally recorded by analog methods, and the first video discs likewise used analog recording. In the music industry, analog recording has been replaced by optical technology in which the data is recorded in a digital format. The first commercial digital disk storage device was the IBM 350, which shipped in 1956 as part of the IBM 305 RAMAC computing system. The random-access, low-density storage of disks was developed to complement the already established sequential-access, high-density storage provided by tape drives using magnetic tape. Vigorous innovation in disk storage technology, coupled with less vigorous innovation in tape storage, has reduced the difference in acquisition cost per terabyte between disk and tape storage.
Disk storage is now used in both computer storage and consumer electronic storage, e.g. audio CDs and video discs. Data on modern disks is stored in fixed-length blocks called sectors, typically ranging from a few hundred to many thousands of bytes. Gross disk drive capacity is the number of disk surfaces times the number of blocks per surface times the number of bytes per block. In certain legacy IBM CKD drives, data was stored on magnetic disks in variable-length blocks called records; the per-record overhead reduced usable capacity. Digital disk drives are block storage devices: each disk is divided into logical blocks, which are addressed by their logical block addresses. Reading from or writing to a disk happens at the granularity of blocks. Early disk capacity was quite low and has been improved in several ways. Improvements in mechanical design and manufacture allowed smaller and more precise heads, meaning that more tracks could be stored on each disk. Advances in data compression permitted more information to be stored in each sector.
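The gross-capacity arithmetic above can be sketched in a few lines (the drive geometry figures below are illustrative, not those of any particular drive):

```python
# Gross capacity of a fixed-block disk:
# surfaces x blocks per surface x bytes per block.
def gross_capacity(surfaces: int, blocks_per_surface: int, bytes_per_block: int) -> int:
    return surfaces * blocks_per_surface * bytes_per_block

# e.g. 4 surfaces, 1,000,000 512-byte sectors per surface
print(gross_capacity(4, 1_000_000, 512))  # 2048000000 bytes (~2 GB)
```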
The drive stores data in cylinders and sectors. The sector is the smallest unit of data stored on a hard disk drive, and each file is assigned many sectors. The smallest entity on a CD is called a frame; it consists of 33 bytes, 24 of which contain six complete 16-bit stereo samples. The other nine bytes consist of eight CIRC error-correction bytes and one subcode byte used for control and display. Information is sent from the computer processor via the BIOS to a chip controlling the data transfer, and from there to the hard drive over a multi-wire connector. Once the data is received by the circuit board of the drive, it is translated and compressed into a format that the individual drive can use to store it on the disk itself; the data is then passed to a chip on the circuit board that controls access to the drive. The drive is divided into sectors of data stored on the sides of the internal disks; an HDD with two disks internally can store data on all four surfaces.
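The CD frame byte budget described above can be checked with a short calculation:

```python
# Byte budget of a CD frame: six complete 16-bit stereo samples,
# plus eight CIRC error-correction bytes and one subcode byte.
SAMPLES = 6
CHANNELS = 2          # stereo
BYTES_PER_SAMPLE = 2  # 16-bit

audio_bytes = SAMPLES * CHANNELS * BYTES_PER_SAMPLE  # 24 bytes of audio
frame_bytes = audio_bytes + 8 + 1                    # + CIRC + subcode
print(audio_bytes, frame_bytes)  # 24 33
```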
The hardware on the drive tells the actuator arm where to go for the relevant track, and the compressed information is sent down to the head, which changes the physical properties, optically or magnetically for example, of each byte on the drive, thus storing the information. A file is not stored in a linear manner; rather, it is laid out in the way that allows the quickest retrieval. Mechanically, there are two different motions occurring inside the drive: the rotation of the disks and the side-to-side motion of the head across a disk. There are two disk rotation methods: constant linear velocity (CLV) varies the rotational speed of the disc depending on the position of the head, while constant angular velocity (CAV) spins the media at one constant speed regardless of where the head is positioned. Track positioning also follows two different methods across disk storage devices. Storage devices focused on holding computer data, e.g. HDDs, FDDs and Iomega Zip drives, use concentric tracks to store data.
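As a sketch of the difference between the two rotation methods, the following computes the spindle speed a CLV drive needs at a given head radius; the read speed and radii are approximate audio-CD figures used only for illustration:

```python
import math

def clv_rpm(linear_speed_m_s: float, radius_m: float) -> float:
    # Constant linear velocity: the rotational speed must vary with the
    # head's radius so the track passes the head at a constant speed.
    return linear_speed_m_s / (2 * math.pi * radius_m) * 60

# Audio CDs read at roughly 1.2-1.4 m/s; the program area spans
# roughly 25 mm to 58 mm in radius. A CAV drive would instead hold
# one of these speeds constant everywhere.
print(round(clv_rpm(1.3, 0.025)))  # spindle speed near the inner edge
print(round(clv_rpm(1.3, 0.058)))  # much slower near the outer edge
```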
During a sequential read or write operation, after the drive accesses all the sectors in a track, it repositions the head to the next track; this causes a momentary delay in the flow of data between the drive and the computer. In contrast, optical audio and video discs use a single spiral track that starts at the innermost point on the disc and flows continuously to the outer edge, so when reading or writing data there is no need to stop the flow of data to switch tracks. This is similar to vinyl records, except that vinyl records start at the outer edge and spiral in toward the center. The disk drive interface is the mechanism/protocol of communication between the drive and its host.
A proximity sensor is a sensor able to detect the presence of nearby objects without any physical contact. A proximity sensor often emits an electromagnetic field or a beam of electromagnetic radiation and looks for changes in the field or return signal. The object being sensed is referred to as the proximity sensor's target. Different targets demand different sensors: for example, a capacitive proximity sensor or photoelectric sensor might be suitable for a plastic target. Proximity sensors can have high reliability and long functional life because of the absence of mechanical parts and the lack of physical contact between the sensor and the sensed object. Proximity sensors are used in machine vibration monitoring to measure the variation in distance between a shaft and its support bearing; this is common in large steam turbines and motors that use sleeve-type bearings. International Electrotechnical Commission standard IEC 60947-5-2 defines the technical details of proximity sensors. A proximity sensor adjusted to a very short range is often used as a touch switch.
Proximity sensors are also used on mobile devices. When the target is within nominal range, the device's lock screen user interface appears, emerging from what is known as sleep mode. Once the device has awoken, if the proximity sensor's target remains unchanged for an extended period, the sensor ignores it and the device reverts to sleep mode. During a telephone call, for example, proximity sensors play a role in detecting accidental touchscreen taps when the phone is held to the ear. Proximity sensors can also be used to recognise air hover gestures, and an array of proximity-sensing elements can replace vision-camera or depth-camera based solutions for hand gesture detection.

Common proximity sensor technologies include: capacitive sensors and capacitive displacement sensors; Doppler effect sensors; eddy-current sensors; inductive sensors; magnetic sensors, including the magnetic proximity fuse; optical sensors (photoelectric sensors, photocells, laser rangefinders); passive thermal infrared sensors; radar; reflection of ionizing radiation; sonar and ultrasonic sensors; fiber optic sensors; and Hall effect sensors.

Typical applications include: parking sensors, systems mounted on car bumpers that sense the distance to nearby cars for parking; ground proximity warning systems for aviation safety; vibration measurement of rotating shafts in machinery; top dead centre/camshaft sensors in reciprocating engines; sheet break sensing in paper machines; anti-aircraft warfare; roller coasters; conveyor systems; beverage and food can making lines; mobile devices (touch screens that come in close proximity to the face, and attenuating radio power in close proximity to the body to reduce radiation exposure); automatic faucets; motion detectors; and occupancy sensors.
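As an illustration of the mobile-device sleep/wake behaviour described above, here is a minimal sketch; the range threshold and timeout are invented for the example, not taken from any device:

```python
# Hypothetical sketch of proximity-driven sleep/wake logic.
# Threshold and timeout values are illustrative, not from any datasheet.
NOMINAL_RANGE_CM = 5.0
IGNORE_AFTER_S = 30.0

def device_state(distance_cm: float, target_present_for_s: float) -> str:
    if distance_cm > NOMINAL_RANGE_CM:
        return "sleep"   # no target within nominal range
    if target_present_for_s > IGNORE_AFTER_S:
        return "sleep"   # stale target: ignore it and revert to sleep
    return "awake"       # target newly in range: show the lock screen

print(device_state(3.0, 1.0))   # awake
print(device_state(3.0, 60.0))  # sleep
```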
An electron microscope is a microscope that uses a beam of accelerated electrons as a source of illumination. As the wavelength of an electron can be up to 100,000 times shorter than that of visible light photons, electron microscopes have a higher resolving power than light microscopes and can reveal the structure of smaller objects. A scanning transmission electron microscope has achieved better than 50 pm resolution in annular dark-field imaging mode and magnifications of up to about 10,000,000x whereas most light microscopes are limited by diffraction to about 200 nm resolution and useful magnifications below 2000x. Electron microscopes have electron optical lens systems that are analogous to the glass lenses of an optical light microscope. Electron microscopes are used to investigate the ultrastructure of a wide range of biological and inorganic specimens including microorganisms, large molecules, biopsy samples and crystals. Industrially, electron microscopes are used for quality control and failure analysis.
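The claim about electron wavelengths can be illustrated with the de Broglie relation; this sketch uses CODATA constants and the standard relativistic correction:

```python
import math

# de Broglie wavelength of an electron accelerated through V volts,
# including the relativistic correction.
H = 6.62607015e-34        # Planck constant, J s
M_E = 9.1093837015e-31    # electron rest mass, kg
E = 1.602176634e-19       # elementary charge, C
C = 299792458.0           # speed of light, m/s

def electron_wavelength_m(volts: float) -> float:
    p = math.sqrt(2 * M_E * E * volts * (1 + E * volts / (2 * M_E * C**2)))
    return H / p

# ~3.7 pm at 100 kV: vastly shorter than 550 nm green light.
print(electron_wavelength_m(100e3))
print(550e-9 / electron_wavelength_m(100e3))  # ratio of wavelengths
```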
Modern electron microscopes produce electron micrographs using specialized digital cameras and frame grabbers to capture the images. In 1926 Hans Busch developed the electromagnetic lens. According to Dennis Gabor, the physicist Leó Szilárd tried in 1928 to convince him to build an electron microscope, for which Szilárd had filed a patent. The first prototype electron microscope, capable of four-hundred-power magnification, was developed in 1931 by the physicist Ernst Ruska and the electrical engineer Max Knoll. The apparatus was the first practical demonstration of the principles of electron microscopy. In May of the same year, Reinhold Rudenberg, the scientific director of Siemens-Schuckertwerke, obtained a patent for an electron microscope. In 1932, Ernst Lubcke of Siemens & Halske built and obtained images from a prototype electron microscope, applying the concepts described in Rudenberg's patent. In the following year, 1933, Ruska built the first electron microscope that exceeded the resolution attainable with an optical microscope.
Four years later, in 1937, Siemens financed the work of Ernst Ruska and Bodo von Borries, and employed Helmut Ruska, Ernst's brother, to develop applications for the microscope with biological specimens. Also in 1937, Manfred von Ardenne pioneered the scanning electron microscope. Siemens produced the first commercial electron microscope in 1938. The first North American electron microscope was constructed in 1938, at the University of Toronto, by Eli Franklin Burton and students Cecil Hall, James Hillier and Albert Prebus, and Siemens produced a transmission electron microscope in 1939. Although current transmission electron microscopes are capable of two million-power magnification, as scientific instruments they remain based upon Ruska's prototype. The original form of the electron microscope, the transmission electron microscope (TEM), uses a high voltage electron beam to illuminate the specimen and create an image. The electron beam is produced by an electron gun fitted with a tungsten filament cathode as the electron source.
The electron beam is accelerated by an anode, typically at +100 kV with respect to the cathode, focused by electrostatic and electromagnetic lenses, and transmitted through the specimen, which is in part transparent to electrons and in part scatters them out of the beam. When it emerges from the specimen, the electron beam carries information about the structure of the specimen, magnified by the objective lens system of the microscope. The spatial variation in this information may be viewed by projecting the magnified electron image onto a fluorescent viewing screen coated with a phosphor or scintillator material such as zinc sulfide. Alternatively, the image can be photographically recorded by exposing a photographic film or plate directly to the electron beam, or a high-resolution phosphor may be coupled by means of a lens optical system or a fibre-optic light guide to the sensor of a digital camera, and the image detected by the camera displayed on a computer monitor. The resolution of TEMs is limited primarily by spherical aberration, but a new generation of hardware correctors can reduce spherical aberration to increase the resolution in high-resolution transmission electron microscopy (HRTEM) to below 0.5 angstrom, enabling magnifications above 50 million times.
The ability of HRTEM to determine the positions of atoms within materials is useful for nanotechnology research and development. Transmission electron microscopes are also used in electron diffraction mode. The advantages of electron diffraction over X-ray crystallography are that the specimen need not be a single crystal or even a polycrystalline powder, and that the Fourier transform reconstruction of the object's magnified structure occurs physically, thus avoiding the need to solve the phase problem faced by X-ray crystallographers after obtaining their X-ray diffraction patterns. One major disadvantage of the transmission electron microscope is the need for extremely thin sections of the specimens, about 100 nanometers. Creating these thin sections for biological and materials specimens is technically challenging. Semiconductor thin sections can be made using a focused ion beam. Biological tissue specimens are chemically fixed and embedded in a polymer resin to stabilize them sufficiently to allow ultrathin sectioning.
Sections of biological specimens, organic polymers and similar materials may require staining with heavy atom labels in order to achieve the required image contrast. One application of TEM is serial-section electron microscopy, for example in analyzing the connectivity in volumetric samples of brain tissue by imaging many thin sections in sequence. The scanning electron microscope (SEM), by contrast, produces images by scanning the specimen with a focused electron beam.
Measurement is the assignment of a number to a characteristic of an object or event, which can be compared with other objects or events. The scope and application of measurement depend on the discipline. In the natural sciences and engineering, measurements do not apply to nominal properties of objects or events, consistent with the guidelines of the International Vocabulary of Metrology published by the International Bureau of Weights and Measures. However, in other fields such as statistics and the social and behavioral sciences, measurements can have multiple levels, including nominal, ordinal, interval and ratio scales. Measurement is a cornerstone of trade, science and quantitative research in many disciplines. Many measurement systems have existed for the varied fields of human existence to facilitate comparisons in these fields; these were achieved by local agreements between trading partners or collaborators. Since the 18th century, developments have progressed towards unifying accepted standards, resulting in the modern International System of Units.
This system reduces all physical measurements to a mathematical combination of seven base units. The science of measurement is pursued in the field of metrology. The measurement of a property may be categorized by the following criteria: type, magnitude, unit and uncertainty. These enable unambiguous comparisons between measurements. The level of measurement is a taxonomy for the methodological character of a comparison: for example, two states of a property may be compared by difference or by ordinal preference. The type is often not explicitly expressed, but implicit in the definition of a measurement procedure. The magnitude is the numerical value of the characterization, obtained with a suitably chosen measuring instrument. A unit assigns a mathematical weighting factor to the magnitude, derived as a ratio to the property of an artifact used as a standard or to a natural physical quantity. An uncertainty represents the random and systematic errors of the measurement procedure. Errors are evaluated by methodically repeating measurements and considering the accuracy and precision of the measuring instrument.
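The evaluation of uncertainty by repeated measurement described above can be sketched as follows; the readings are made-up illustrative values:

```python
import statistics

# Repeating a measurement and summarizing: the mean estimates the
# magnitude, and the standard deviation of the mean gives a (type A)
# uncertainty. Readings are invented, e.g. repeated measurements of g.
readings = [9.79, 9.82, 9.81, 9.80, 9.83]  # m/s^2

mean = statistics.mean(readings)
u = statistics.stdev(readings) / len(readings) ** 0.5  # standard error of the mean
print(f"{mean:.3f} +/- {u:.3f}")  # 9.810 +/- 0.007
```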
Measurements most commonly use the International System of Units (SI) as a comparison framework. The system defines seven base units: the metre, kilogram, second, ampere, kelvin, candela and mole. Six of these units are defined without reference to a particular physical object which serves as a standard, while the kilogram is still embodied in an artifact which rests at the headquarters of the International Bureau of Weights and Measures in Sèvres near Paris. Artifact-free definitions fix measurements at an exact value related to a physical constant or other invariable phenomenon in nature, in contrast to standard artifacts, which are subject to deterioration or destruction. Instead, the measurement unit can only change through increased accuracy in determining the value of the constant it is tied to. The first proposal to tie an SI base unit to an experimental standard independent of fiat was by Charles Sanders Peirce, who proposed to define the metre in terms of the wavelength of a spectral line. This directly influenced the Michelson–Morley experiment.
With the exception of a few fundamental quantum constants, units of measurement are derived from historical agreements. Nothing inherent in nature dictates that an inch has to be a certain length, nor that a mile is a better measure of distance than a kilometre. Over the course of human history, first for convenience and then for necessity, standards of measurement evolved so that communities would have certain common benchmarks. Laws regulating measurement were originally developed to prevent fraud in commerce. Units of measurement are now defined on a scientific basis, overseen by governmental or independent agencies, and established in international treaties, pre-eminent of which is the General Conference on Weights and Measures (CGPM), established in 1875 by the Metre Convention, overseeing the International System of Units and having custody of the International Prototype Kilogram. The metre, for example, was redefined in 1983 by the CGPM in terms of the speed of light, while in 1959 the international yard was defined by the governments of the United States, the United Kingdom, South Africa and other Commonwealth countries as exactly 0.9144 metres.
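Because the international yard is defined as exactly 0.9144 m, other customary lengths follow exactly from it; a short sketch using exact rational arithmetic:

```python
from fractions import Fraction

# The international yard is exactly 0.9144 m by definition; the foot,
# inch and (statute) mile follow exactly from it.
YARD_M = Fraction(9144, 10000)
FOOT_M = YARD_M / 3
INCH_M = FOOT_M / 12
MILE_M = YARD_M * 1760

print(float(INCH_M))  # 0.0254 m exactly
print(float(MILE_M))  # 1609.344 m exactly
```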
In the United States, the National Institute of Standards and Technology (NIST), a division of the United States Department of Commerce, regulates commercial measurements. In the United Kingdom, the role is performed by the National Physical Laboratory, in Australia by the National Measurement Institute, in South Africa by the Council for Scientific and Industrial Research and in India by the National Physical Laboratory of India. Before SI units were adopted around the world, the British systems of English units and later imperial units were used in Britain, the Commonwealth and the United States. The system came to be known as U.S. customary units in the United States and is still in use there and in a few Caribbean countries. These various systems of measurement have at times been called foot-pound-second systems, after the Imperial units for length, weight and time, though the tons, hundredweights and nautical miles, for example, are different for the U.S. units. Many Imperial units remain in use in Britain, which has largely switched to the SI system, with a few exceptions such as road signs, which are still in miles.
Draught beer and cider must be sold by the imperial pint, and milk in returnable bottles can be sold by the imperial pint. Many people measure their height in feet and inches and their weight in stone and pounds.
A magnetic field is a vector field that describes the magnetic influence of electric charges in relative motion and magnetized materials. Magnetic fields are observed over a wide range of scales, from subatomic particles to galaxies. In everyday life, the effects of magnetic fields are seen in permanent magnets, which pull on magnetic materials and attract or repel other magnets. Magnetic fields surround, and are created by, magnetized material and moving electric charges such as those used in electromagnets. Magnetic fields exert forces on nearby moving electric charges and torques on nearby magnets. In addition, a magnetic field that varies with location exerts a force on magnetic materials. Both the strength and direction of a magnetic field vary with location; as such, it is an example of a vector field. The term 'magnetic field' is used for two distinct but related fields, denoted by the symbols B and H. In the International System of Units, H, magnetic field strength, is measured in the SI base units of ampere per meter. B, magnetic flux density, is measured in tesla, equivalent to newton per meter per ampere.
H and B differ in how they account for magnetization; in a vacuum, B and H are the same aside from units. Magnetic fields are produced by moving electric charges and by the intrinsic magnetic moments of elementary particles associated with a fundamental quantum property, their spin. Magnetic fields and electric fields are interrelated, and are both components of the electromagnetic force, one of the four fundamental forces of nature. Magnetic fields are used throughout modern technology in electrical engineering and electromechanics. Rotating magnetic fields are used in both electric motors and generators, and the interaction of magnetic fields in electric devices such as transformers is studied in the discipline of magnetic circuits. Magnetic forces give information about the charge carriers in a material through the Hall effect. The Earth produces its own magnetic field, which shields the Earth's ozone layer from the solar wind and is important in navigation using a compass. Although magnets and magnetism were studied much earlier, the research of magnetic fields began in 1269 when French scholar Petrus Peregrinus de Maricourt mapped out the magnetic field on the surface of a spherical magnet using iron needles.
Noting that the resulting field lines crossed at two points, he named those points 'poles' in analogy to Earth's poles. He also clearly articulated the principle that magnets always have both a north and a south pole, no matter how finely one slices them. Three centuries later, William Gilbert of Colchester replicated Petrus Peregrinus' work and was the first to state explicitly that Earth is a magnet. Published in 1600, Gilbert's work, De Magnete, helped to establish magnetism as a science. In 1750, John Michell stated that magnetic poles attract and repel in accordance with an inverse square law. Charles-Augustin de Coulomb experimentally verified this in 1785 and stated explicitly that the north and south poles cannot be separated. Building on this force between poles, Siméon Denis Poisson created the first successful model of the magnetic field, which he presented in 1824. In this model, a magnetic H-field is produced by 'magnetic poles' and magnetism is due to small pairs of north/south magnetic poles. Three discoveries in 1820 challenged this foundation of magnetism, though.
Hans Christian Ørsted demonstrated that a current-carrying wire is surrounded by a circular magnetic field. André-Marie Ampère showed that parallel wires with currents attract one another if the currents are in the same direction and repel if they are in opposite directions. Jean-Baptiste Biot and Félix Savart announced empirical results about the forces that a current-carrying long, straight wire exerted on a small magnet, determining that the forces were inversely proportional to the perpendicular distance from the wire to the magnet. Laplace deduced, but did not publish, a law of force based on the differential action of a differential section of the wire, which became known as the Biot–Savart law. Extending these experiments, Ampère published his own successful model of magnetism in 1825. In it, he showed the equivalence of electrical currents to magnets and proposed that magnetism is due to perpetually flowing loops of current instead of the dipoles of magnetic charge in Poisson's model.
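The inverse-distance dependence Biot and Savart found leads, for a long straight wire, to the familiar result B = μ0·I/(2π·d); a minimal sketch:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T m/A

def wire_field_tesla(current_a: float, distance_m: float) -> float:
    # Magnitude of B around a long straight wire: proportional to the
    # current and inversely proportional to the perpendicular distance.
    return MU_0 * current_a / (2 * math.pi * distance_m)

# 10 A at 1 cm and at 2 cm: doubling the distance halves the field.
print(wire_field_tesla(10, 0.01))  # ~2e-4 T
print(wire_field_tesla(10, 0.02))  # ~1e-4 T
```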
This has the additional benefit of explaining why magnetic charge cannot be isolated. Further, Ampère derived both Ampère's force law, describing the force between two currents, and Ampère's law, which, like the Biot–Savart law, describes the magnetic field generated by a steady current. In this work, Ampère introduced the term electrodynamics to describe the relationship between electricity and magnetism. In 1831, Michael Faraday discovered electromagnetic induction when he found that a changing magnetic field generates an encircling electric field; he described this phenomenon in what is now known as Faraday's law of induction. Franz Ernst Neumann proved that, for a moving conductor in a magnetic field, induction is a consequence of Ampère's force law. In the process, he introduced the magnetic vector potential, later shown to be equivalent to the underlying mechanism proposed by Faraday. In 1850, Lord Kelvin, then known as William Thomson, distinguished between the two magnetic fields now denoted H and B: the former applied to Poisson's model and the latter to Ampère's model and induction. Further, he derived how H and B relate to each other.
Hall effect sensor
A Hall effect sensor is a device used to measure the magnitude of a magnetic field. Its output voltage is directly proportional to the magnetic field strength through it. Hall effect sensors are used for proximity sensing, speed detection and current sensing applications. A Hall sensor combined with threshold detection acts as, and is called, a switch. Seen in industrial applications such as the pictured pneumatic cylinder, they are also used in consumer equipment; for example, they can be used in computer keyboards, an application that requires ultra-high reliability. Hall sensors are used to time the speed of wheels and shafts, such as for internal combustion engine ignition timing and anti-lock braking systems, and they are used in brushless DC electric motors to detect the position of the permanent magnet. In the pictured wheel with two equally spaced magnets, the voltage from the sensor peaks twice for each revolution; this arrangement is used to regulate the speed of disk drives. A Hall probe contains an indium compound semiconductor crystal, such as indium antimonide, mounted on an aluminum backing plate and encapsulated in the probe head.
The plane of the crystal is perpendicular to the probe handle. Connecting leads from the crystal are brought down through the handle to the circuit box. When the Hall probe is held so that the magnetic field lines pass at right angles through the sensor of the probe, the meter gives a reading of the value of magnetic flux density. A current is passed through the crystal which, when placed in a magnetic field, has a "Hall effect" voltage developed across it. The Hall effect arises because the magnetic field applies a Lorentz force to the drifting charge carriers; the result is charge separation, with a buildup of either positive or negative charges on the bottom or on the top of the plate. The crystal measures 5 mm square. The probe handle, being made of a non-ferrous material, has no disturbing effect on the field. A Hall probe should be calibrated against a known value of magnetic field strength; for a solenoid, the Hall probe is placed in the center.
In a Hall effect sensor, a thin strip of metal has a current applied along it. In the presence of a magnetic field, the electrons in the metal strip are deflected toward one edge, producing a voltage gradient across the short side of the strip. Hall effect sensors have an advantage over inductive sensors in that, while inductive sensors respond to a changing magnetic field which induces current in a coil of wire and produces voltage at its output, Hall effect sensors can detect static magnetic fields. In its simplest form, the sensor operates as an analog transducer, directly returning a voltage. With a known magnetic field, its distance from the Hall plate can be determined, and using groups of sensors, the relative position of the magnet can be deduced. When a beam of charged particles passes through a magnetic field, forces act on the particles and the beam is deflected from a straight path. The flow of electrons through a conductor forms a beam of charged carriers; when a conductor is placed in a magnetic field perpendicular to the direction of the electrons, they will be deflected from a straight path.
As a consequence, one plane of the conductor will become negatively charged and the opposite side will become positively charged. The voltage between these planes is called the Hall voltage. When the force on the charged particles from the electric field balances the force produced by the magnetic field, the separation of charges stops. If the current is not changing, the Hall voltage is then a measure of the magnetic flux density. There are two kinds of Hall effect sensors: linear sensors, whose output voltage varies continuously with the magnetic flux density, and threshold sensors, which switch at a set flux density. The key factor determining the sensitivity of Hall effect sensors is high electron mobility; as a result, materials such as gallium arsenide, indium arsenide, indium phosphide, indium antimonide and graphene are particularly suitable. Linear Hall effect sensors require a linear circuit for processing of the sensor's output signal. Such a linear circuit provides a constant driving current to the sensor and amplifies the output signal. In some cases the linear circuit may also cancel the offset voltage of Hall effect sensors.
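The charge-balance argument above yields the standard Hall voltage formula V_H = I·B/(n·q·t); the sketch below uses illustrative carrier densities to show why semiconductors with far fewer (but more mobile) carriers give much larger signals than metals:

```python
# Hall voltage V_H = I*B / (n*q*t) for a conducting plate:
# current I, flux density B, carrier density n, carrier charge q,
# plate thickness t. Densities below are rough illustrative values.
Q_E = 1.602176634e-19  # elementary charge, C

def hall_voltage(current_a, b_tesla, n_per_m3, thickness_m):
    return current_a * b_tesla / (n_per_m3 * Q_E * thickness_m)

# A metal like copper (n ~ 8.5e28 /m^3) gives sub-microvolt signals;
# a low-carrier-density semiconductor plate gives volts-scale signals.
print(hall_voltage(1.0, 1.0, 8.5e28, 1e-4))  # ~7e-7 V
print(hall_voltage(1.0, 1.0, 2e22, 1e-4))    # ~3 V
```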
Moreover, AC modulation of the driving current may reduce the influence of this offset voltage. Hall effect sensors with linear transducers are commonly integrated with digital electronics; this enables advanced corrections to the sensor's characteristics and digital interfacing to microprocessor systems. In some IC Hall effect sensor solutions a DSP is used, which provides for more choices among processing techniques. The Hall effect sensor interfaces may include input diagnostics, fault protection for transient conditions and short/open circuit detection; they may also provide and monitor the current to the Hall effect sensor itself, and there are precision IC products available to handle these features. A Hall effect sensor may operate as an electronic switch. Such a switch is much more reliable than a mechanical one: it can be operated at higher frequencies, and it does not suffer from contact bounce because a solid-state switch with hysteresis is used rather than a mechanical contact.
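The contact-bounce immunity comes from hysteresis: the switch turns on above one flux density and off below a lower one, so a field hovering near a single threshold cannot make the output chatter. A minimal sketch with invented thresholds:

```python
# Sketch of a Hall switch with hysteresis. Threshold values are
# illustrative, not from any part's datasheet.
B_ON, B_OFF = 3.0e-3, 1.5e-3  # turn-on / turn-off flux density, tesla

class HallSwitch:
    def __init__(self):
        self.on = False

    def update(self, b_tesla: float) -> bool:
        if b_tesla >= B_ON:
            self.on = True
        elif b_tesla <= B_OFF:
            self.on = False
        return self.on  # between the thresholds the previous state is held

sw = HallSwitch()
print([sw.update(b) for b in (1e-3, 4e-3, 2e-3, 1e-3, 2e-3)])
# [False, True, True, False, False]
```

Note that 2e-3 T, which lies between the two thresholds, leaves the state unchanged in both directions; a single-threshold comparator would toggle there on any noise.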
Machining is any of various processes in which a piece of raw material is cut into a desired final shape and size by a controlled material-removal process. The processes that have this common theme, controlled material removal, are today collectively known as subtractive manufacturing, in distinction from processes of controlled material addition, which are known as additive manufacturing. What the "controlled" part of the definition implies can vary, but it always implies the use of machine tools. Machining is a part of the manufacture of many metal products, but it can be used on materials such as wood, plastic and composites. A person who specializes in machining is called a machinist. A room, building, or company where machining is done is called a machine shop. Much of modern-day machining is carried out by computer numerical control, in which computers are used to control the movement and operation of the mills and other cutting machines; the precise meaning of the term machining has evolved over the past one and a half centuries as technology has advanced.
In the 18th century, the word machinist meant a person who built or repaired machines. This person's work was done mostly by hand, using processes such as the carving of wood and the hand-forging and hand-filing of metal. At the time, millwrights and builders of new kinds of engines, such as James Watt or John Wilkinson, would fit the definition. The noun machine tool and the verb to machine did not yet exist. Around the middle of the 19th century, the latter words were coined as the concepts that they described evolved into widespread existence. Therefore, during the Machine Age, machining referred to the "traditional" machining processes, such as turning, drilling, broaching, shaping, planing and tapping. In these "traditional" or "conventional" machining processes, machine tools, such as lathes, milling machines, drill presses, or others, are used with a sharp cutting tool to remove material to achieve a desired geometry. Since the advent of new technologies in the post–World War II era, such as electrical discharge machining, electrochemical machining, electron beam machining, photochemical machining and ultrasonic machining, the retronym "conventional machining" can be used to differentiate those classic technologies from the newer ones.
In current usage, the term "machining" without qualification usually implies the traditional machining processes. In the decades of the 2000s and 2010s, as additive manufacturing (AM) evolved beyond its earlier laboratory and rapid prototyping contexts and began to become common throughout all phases of manufacturing, the term subtractive manufacturing became common retronymously, in logical contrast with AM, covering any removal processes previously covered by the term machining. The two terms are effectively synonymous, although the long-established usage of the term machining continues. This is comparable to the idea that the verb sense of contact evolved because of the proliferation of ways to contact someone but did not replace the earlier terms such as call, talk to, or write to. The three principal machining processes are classified as turning, drilling and milling. Other operations falling into miscellaneous categories include shaping, boring and sawing. Turning operations are operations that rotate the workpiece as the primary method of moving metal against the cutting tool.
Lathes are the principal machine tool used in turning. Milling operations are operations in which the cutting tool rotates to bring cutting edges to bear against the workpiece; milling machines are the principal machine tool used in milling. Drilling operations are operations in which holes are produced or refined by bringing a rotating cutter with cutting edges at the lower extremity into contact with the workpiece. Drilling operations are done mostly in drill presses but sometimes on lathes or mills. Miscellaneous operations are operations that, strictly speaking, may not be machining operations in that they may not be swarf-producing operations, but these operations are performed at a typical machine tool. Burnishing is an example of a miscellaneous operation; it can be performed at a lathe, mill, or drill press. An unfinished workpiece requiring machining will need to have some material cut away to create a finished product. A finished product would be a workpiece that meets the specifications set out for that workpiece by engineering drawings or blueprints.
For example, a workpiece may be required to have a specific outside diameter. A lathe is a machine tool that can be used to create that diameter by rotating a metal workpiece, so that a cutting tool can cut metal away, creating a smooth, round surface matching the required diameter and surface finish. A drill can be used to remove metal in the shape of a cylindrical hole. Other tools that may be used for various types of metal removal are milling machines and grinding machines. Many of these same techniques are used in woodworking. More recent, advanced machining techniques include precision CNC machining, electrical discharge machining, electro-chemical erosion, laser cutting, or water jet cutting to shape metal workpieces; as a commercial venture, machining is performed in a machine shop, which consists of one or more workrooms containing major machine tools. Although a machine shop can be a stand-alone operation, many businesses maintain internal machine shops which support specialized needs of the business.
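As a small worked example of material removal in turning, the volume cut away when reducing a bar's outside diameter is the difference of two cylinders (the dimensions below are illustrative):

```python
import math

# Volume of metal removed when turning a bar from stock diameter d0
# down to finished diameter d1 over a length L: pi/4 * (d0^2 - d1^2) * L.
def turned_volume_mm3(d0_mm: float, d1_mm: float, length_mm: float) -> float:
    return math.pi / 4 * (d0_mm**2 - d1_mm**2) * length_mm

# e.g. 25 mm stock turned down to 20 mm over a 100 mm length
print(round(turned_volume_mm3(25, 20, 100)))  # ~17671 mm^3 of swarf
```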
Machining requires attention to many details for a workpiece to meet the specifications set out in engineering drawings or blueprints.