Reliability engineering is a sub-discipline of systems engineering that emphasizes dependability in the lifecycle management of a product. Dependability, or reliability, describes the ability of a system or component to function under stated conditions for a specified period of time. Reliability is closely related to availability, described as the ability of a component or system to function at a specified moment or interval of time. Reliability is theoretically defined as the probability of success. Testability and maintenance are often defined as part of "reliability engineering" in reliability programs. Reliability plays a key role in the cost-effectiveness of systems. Reliability engineering deals with the estimation and management of high levels of "lifetime" engineering uncertainty and risks of failure. Although stochastic parameters define and affect reliability, reliability is not achieved by mathematics and statistics alone; one cannot find a root cause by looking only at statistics. As one critique puts it: "Nearly all teaching and literature on the subject emphasize these aspects, and ignore the reality that the ranges of uncertainty involved invalidate quantitative methods for prediction and measurement."
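The "probability of success" definition above is often made concrete with the constant-failure-rate (exponential) model. The model choice, the failure-rate value, and the function names below are illustrative assumptions for this sketch, not something the text above prescribes:

```python
import math

def reliability(failure_rate: float, t: float) -> float:
    """Probability that a unit with a constant failure rate survives to time t.

    Under the exponential model, R(t) = exp(-lambda * t).
    """
    return math.exp(-failure_rate * t)

# Illustrative numbers: lambda = 1e-4 failures/hour, 1000-hour mission.
lam = 1e-4
r = reliability(lam, 1000.0)
mtbf = 1.0 / lam  # mean time between failures under this model
print(round(r, 4), mtbf)  # 0.9048 10000.0
```

The point of the surrounding paragraph still stands: writing R(t) as a formula is easy, but obtaining a trustworthy value for the failure rate in a real, massively multivariate system is the hard part.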
For example, it is easy to represent "probability of failure" as a symbol or value in an equation, but it is almost impossible to predict its true magnitude in practice, which is massively multivariate, so having the equation for reliability does not begin to equal having an accurate predictive measurement of reliability. Reliability engineering relates closely to safety engineering and to system safety, in that they use common methods for their analysis and may require input from each other. Reliability engineering focuses on the costs of failure caused by system downtime, cost of spares, repair equipment and cost of warranty claims. Safety engineering focuses more on preserving life and nature than on cost, and therefore deals only with dangerous system-failure modes. High reliability levels result from good engineering and from attention to detail, never from only reactive failure management. The word reliability can be traced back to 1816; it is first attested in the writings of the poet Samuel Taylor Coleridge. Before World War II the term was linked mostly to repeatability.
In the 1920s, product improvement through the use of statistical process control was promoted by Dr. Walter A. Shewhart at Bell Labs, around the time that Waloddi Weibull was working on statistical models for fatigue; the development of reliability engineering was here on a parallel path with quality. The modern use of the word reliability was defined by the U.S. military in the 1940s, characterizing a product that would operate when expected and for a specified period of time. In World War II, many reliability issues were due to the inherent unreliability of the electronic equipment available at the time and to fatigue issues. In 1945, M. A. Miner published the seminal paper "Cumulative Damage in Fatigue" in an ASME journal. A main application of reliability engineering in the military was the vacuum tube as used in radar systems and other electronics, for which reliability proved to be problematic and costly. The IEEE formed the Reliability Society in 1948. In 1950, the United States Department of Defense formed a group called the "Advisory Group on the Reliability of Electronic Equipment" to investigate reliability methods for military equipment.
This group recommended three main ways of working: improve component reliability; establish quality and reliability requirements for suppliers; and collect field data and find the root causes of failures. In the 1960s, more emphasis was given to reliability testing at the system level; the famous military standard 781 was created at that time. Around this period the much-used military handbook 217 was published by RCA and was used for the prediction of failure rates of components. The emphasis on component reliability and empirical research alone then decreased, and more pragmatic approaches, as used in the consumer industries, were being adopted. In the 1980s, televisions were made up of solid-state semiconductors. Automobiles increased their use of semiconductors with a variety of microcomputers under the hood and in the dash. Large air conditioning systems developed electronic controllers, as had microwave ovens and a variety of other appliances. Communications systems began to adopt electronics to replace older mechanical switching systems.
Bellcore issued the first consumer prediction methodology for telecommunications, and SAE developed a similar document, SAE870050, for automotive applications. The nature of predictions evolved during the decade, and it became apparent that die complexity wasn't the only factor that determined failure rates for integrated circuits. Kam Wong published a paper questioning the bathtub curve (see also reliability-centered maintenance). During this decade, the failure rate of many components dropped by a factor of 10. Software became important to the reliability of systems. By the 1990s, the pace of IC development was picking up. Wider use of stand-alone microcomputers was common, and the PC market helped keep IC densities following Moore's law.
Pigging in the context of pipelines refers to the practice of using devices known as "pigs" or scrapers to perform various maintenance operations. This is done without stopping the flow of the product in the pipeline; these operations include, but are not limited to, cleaning and inspecting the pipeline. It is accomplished by inserting the pig into a "pig launcher" (an oversized section in the pipeline, reducing to the normal diameter); the launching station is then closed, and the pressure-driven flow of the product in the pipeline is used to push the pig along down the pipe until it reaches the receiving trap, the "pig catcher". Pigging has been used for many years to clean large-diameter pipelines in the oil industry. Today, the use of smaller-diameter pigging systems is increasing in many continuous and batch process plants as plant operators search for increased efficiencies and reduced costs. Pigging can be used for any section of the transfer process between, for example, storage or filling systems. Pigging systems are installed in industries handling products as diverse as lubricating oils, chemicals, toiletries and foodstuffs.
Pigs are used in lube oil or paint blending to clean the pipes to avoid cross-contamination, and to empty the pipes into the product tanks. Pigging is done at the beginning and at the end of each batch, but sometimes it is done in the midst of a batch, such as when producing a premix that will be used as an intermediate component. Pigs are also used in oil and gas pipelines to clean the pipes. "Smart pigs" are used to inspect pipelines in order to prevent leaks, which can be explosive and dangerous to the environment. They do not interrupt production, though some product can be lost when the pig is extracted. Pigs can also be used to separate different products in a multiproduct pipeline. If the pipeline contains butterfly valves or reduced-port ball valves, the pipeline cannot be pigged. Full-port ball valves cause no problems because the inside diameter of the ball opening is the same as that of the pipe. Some early cleaning "pigs" were made from straw bales wrapped in barbed wire, while others used leather.
Both made a squealing noise while traveling through the pipe, sounding to some like a pig squealing, which gave pigs their name. "PIG" is sometimes claimed as an acronym or backronym derived from the initial letters of the term "Pipeline Inspection Gauge" or "Pipeline Intervention Gadget". A major advantage of piggable systems for multi-product pipelines is the potential for product savings. At the end of each product transfer, it is possible to clear out the entire line contents with the pig, either forwards to the receipt point or backwards to the source tank, so there is no requirement for extensive line flushing. Without the need for line flushing, pigging offers the additional advantage of much more rapid and reliable product changeover. Product sampling at the receipt point is faster with pigs because the interface between products is clear. Pigging can be operated by a programmable logic controller. Pigging also has a significant role to play in reducing the environmental impact of batch operations.
Traditionally, the only way that an operator of a batch process could ensure a product was cleared from a line was to flush the line with a cleaning agent such as water or a solvent, or with the next product. The cleaning agent then had to be subjected to effluent treatment or solvent recovery, and if a product was used to clear the line, it was necessary to downgrade or dump the contaminated portion of the product. All of these problems can now be eliminated thanks to the precise interface produced by modern pigging systems. Pigging systems are designed so that the pig is loaded into the launcher, which is then pressured through a kicker line to launch the pig into the pipeline. In some cases, the pig is removed from the pipeline via the receiver at the end of each run. All systems must allow for the receipt of pigs at the launcher, as blockages in the pipeline may require the pigs to be pushed back to the launcher. Many systems are designed to pig the pipeline in either direction; the pig is pushed either with a liquid or a gas.
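The launch-and-receive sequence described above can be sketched as a toy state machine. All step names, the transition order, and the interlock logic here are illustrative assumptions drawn from the prose, not a real pipeline control-system API:

```python
# Toy sketch of one pig launch/receive cycle.
STEPS = [
    "load_pig_into_launcher",
    "close_launcher",
    "open_kicker_line",             # product pressure pushes the pig into the line
    "pig_travels_pipeline",
    "pig_arrives_at_receiver",
    "depressurize_receiver",        # must happen before the barrel is opened
    "open_receiver_and_remove_pig",
]

def run_cycle() -> list[str]:
    """Execute the steps in order, enforcing one safety interlock."""
    log: list[str] = []
    for step in STEPS:
        # Interlock: never open the barrel while it is still pressurized.
        if step == "open_receiver_and_remove_pig":
            assert "depressurize_receiver" in log, "barrel still pressurized!"
        log.append(step)
    return log

print(run_cycle()[-1])  # open_receiver_and_remove_pig
```

The interlock mirrors the hazard discussed next: opening a pressurized barrel can eject the pig and injure anyone standing in front of the door.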
The pigs must be removed for several reasons: many pigs are rented; pigs wear and must be replaced; and cleaning pigs push contaminants such as wax and foreign objects out of the pipeline, which must then be removed. There are inherent risks in opening the barrel to atmospheric pressure, so care must be taken to ensure that the barrel is depressurized prior to opening. If the barrel is not depressurized, the pig can be ejected from the barrel, and operators have been injured when standing in front of an open pig door. A pig was once accidentally shot out of the end of a pipeline that lacked a proper pig receiver and went through the side of a house 500 feet away. When the product is sour, the barrel should be evacuated to a flare system where the sour gas is burnt, and operators should wear self-contained breathing apparatus. A few pigging systems use a "captive pig", in which the pipeline is opened only to check the condition of the pig; at all other times, the pig is shuttled up and down the pipeline at the end of each transfer, and the pipeline is never opened during process operation.
These systems are not common.
In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, automated reasoning and other tasks. As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input, the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. The concept of an algorithm has existed for centuries. Greek mathematicians used algorithms in the sieve of Eratosthenes for finding prime numbers and in the Euclidean algorithm for finding the greatest common divisor of two numbers. The word algorithm itself is derived from the name of the 9th-century mathematician Muḥammad ibn Mūsā al-Khwārizmī, Latinized as Algoritmi.
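The two classical algorithms named above can each be written in a few lines; the sketch below is one standard rendering in Python (function names are this sketch's own choices):

```python
def gcd(m: int, n: int) -> int:
    """Euclidean algorithm: repeatedly replace (m, n) with (n, m mod n)."""
    while n != 0:
        m, n = n, m % n
    return m

def sieve(limit: int) -> list[int]:
    """Sieve of Eratosthenes: all primes up to and including limit (>= 2)."""
    is_prime = [False, False] + [True] * (limit - 1)
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Cross off every multiple of p, starting at p*p.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(gcd(1071, 462))  # 21
print(sieve(20))       # [2, 3, 5, 7, 11, 13, 17, 19]
```

Both illustrate the definition in the paragraph: a finite sequence of well-defined states that is guaranteed to terminate with the desired output.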
A partial formalization of what would become the modern concept of the algorithm began with attempts to solve the Entscheidungsproblem posed by David Hilbert in 1928. Later formalizations were framed as attempts to define "effective calculability" or "effective method"; those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939. The word 'algorithm' has its roots in the Latinization of the name of Muhammad ibn Musa al-Khwarizmi in a first step to algorismus. Al-Khwārizmī was a Persian mathematician, astronomer and scholar in the House of Wisdom in Baghdad, whose name means 'the native of Khwarazm', a region that was part of Greater Iran and is now in Uzbekistan. About 825, al-Khwarizmi wrote an Arabic-language treatise on the Hindu–Arabic numeral system, which was translated into Latin during the 12th century under the title Algoritmi de numero Indorum. This title means "Algoritmi on the numbers of the Indians", where "Algoritmi" was the translator's Latinization of al-Khwarizmi's name.
Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, primarily through another of his books, the Algebra. In late medieval Latin, English 'algorism', the corruption of his name, meant the "decimal number system". In the 15th century, under the influence of the Greek word ἀριθμός ('number'), the Latin word was altered to algorithmus, and the corresponding English term 'algorithm' is first attested in the 17th century. In English, algorism was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris. Which translates as: Algorism is the art by which at present we use those Indian figures, which number two times five. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice (Talibus Indorum), or Hindu numerals.
An informal definition could be "a set of rules that defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. A prototypical example of an algorithm is the Euclidean algorithm to determine the maximum common divisor of two integers. Boolos & Jeffrey (1974, 1999) offer an informal meaning of the word in the following quotation: No human being can write fast enough, or long enough, or small enough† to list all members of an enumerably infinite set by writing out their names, one after another, in some notation, but humans can do something useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human, capable of carrying out only elementary operations on symbols.
An "enumerably infinite set" is one whose elements can be put into one-to-one correspondence with the integers. Thus Boolos and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be arbitrarily large. For example, an algorithm can be an algebraic equation such as y = m + n, with two arbitrary "input variables" m and n that produce an output y. But various authors' attempts to define the notion indicate that the word implies much more than this, something on the order of: precise instructions for a fast, efficient, "good" process that specifies the "moves" of "the computer" to find and process arbitrary input integers/symbols m and n, symbols + and =, and "effectively" produce, in a "reasonable" time, an output-integer y at a specified place and in a specified format.
The Hall effect is the production of a voltage difference across an electrical conductor, transverse to an electric current in the conductor and to an applied magnetic field perpendicular to the current. It was discovered by Edwin Hall in 1879. For clarity, the original effect is sometimes called the ordinary Hall effect to distinguish it from other "Hall effects" which have different physical mechanisms. The Hall coefficient is defined as the ratio of the induced electric field to the product of the current density and the applied magnetic field. It is a characteristic of the material from which the conductor is made, since its value depends on the type and properties of the charge carriers that constitute the current. The Hall effect was discovered in 1879 by Edwin Hall while he was working on his doctoral degree at Johns Hopkins University in Baltimore, Maryland. Eighteen years before the electron was discovered, his measurements of the tiny effect produced in the apparatus he used were an experimental tour de force, published under the name "On a New Action of the Magnet on Electric Currents".
The Hall effect is due to the nature of the current in a conductor. Current consists of the movement of many small charge carriers, typically electrons, holes, ions, or all three. When a magnetic field is present, these charges experience a force, called the Lorentz force. When such a magnetic field is absent, the charges follow approximately straight 'line of sight' paths between collisions with impurities, etc. However, when a magnetic field with a perpendicular component is applied, their paths between collisions are curved, and thus moving charges accumulate on one face of the material. This leaves equal and opposite charges exposed on the other face, where there is a scarcity of mobile charges. The result is an asymmetric distribution of charge density across the Hall element, arising from a force that is perpendicular to both the 'line of sight' path and the applied magnetic field. The separation of charge establishes an electric field that opposes the migration of further charge, so a steady electric potential is established for as long as the charge is flowing.
In classical electromagnetism electrons move in the opposite direction of the conventional current I. In some semiconductors it appears that "holes" are flowing, because the direction of the voltage is opposite to the derivation below. For a simple metal where there is only one type of charge carrier, the Hall voltage V_H can be derived by using the Lorentz force and observing that, in the steady-state condition, charges are not moving in the y-axis direction: the magnetic force on each electron in the y-axis direction is cancelled by a y-axis electrical force due to the buildup of charges. The v_x term is the drift velocity of the current, assumed at this point to be carried by holes by convention. The v_x B_z term is negative in the y-axis direction by the right-hand rule. In the steady state, the total force F = q(E + v × B) = 0, so 0 = E_y − v_x B_z, where E_y is assigned in the direction of the y-axis. In wires, electrons instead of holes are flowing, so v_x → −v_x and q → −q; also E_y = −V_H/w, where w is the width of the conductor. Substituting these changes gives V_H = v_x B_z w. The conventional "hole" current is in the negative direction of the electron current and the negative of the electrical charge, which gives I_x = n t w (−v_x)(−e), where n is the charge carrier density, t w is the cross-sectional area, and −e is the charge of each electron.
Solving for w and plugging into the above gives the Hall voltage: V_H = I_x B_z / (n t e). If the charge buildup had been positive, the V_H assigned in the image would have been negative. The Hall coefficient is defined as R_H = E_y / (j_x B_z), where j_x is the current density of the carrier electrons and E_y is the induced electric field. In SI units, this becomes R_H = E_y / (j_x B) = V_H t / (I B) = −1/(n e). As a result, the Hall effect is useful as a means to measure either the carrier density or the magnetic field. One important feature of the Hall effect is that it differentiates between positive charges moving in one direction and negative charges moving in the opposite. The Hall effect offered the first real proof that electric currents in metals are carried by moving electrons, not by protons. It also showed that in some substances (especially p-type semiconductors) it is more appropriate to think of the current as positive "holes" moving rather than negative electrons.
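The relation V_H = I_x B_z / (n t e) can be evaluated directly, or inverted to estimate the carrier density from a measured Hall voltage. The numeric inputs below (1 A, 1 T, a 100 µm thick strip, a copper-like carrier density) are illustrative assumptions for this sketch:

```python
E = 1.602176634e-19  # elementary charge, in coulombs (exact SI value)

def hall_voltage(I: float, B: float, n: float, t: float) -> float:
    """V_H = I*B/(n*t*e) for a single-carrier conductor of thickness t."""
    return I * B / (n * t * E)

def carrier_density(I: float, B: float, t: float, V_H: float) -> float:
    """Invert the same relation to recover n from a measured Hall voltage."""
    return I * B / (V_H * t * E)

# Illustrative: 1 A through a 100-micrometre-thick strip in a 1 T field,
# with a copper-like carrier density of about 8.5e28 per cubic metre.
V = hall_voltage(1.0, 1.0, 8.5e28, 100e-6)
print(V)  # ~7.3e-7 V: metallic Hall voltages are tiny
```

The tiny output voltage illustrates why Hall's 1879 measurement, made eighteen years before the electron was discovered, was such a tour de force.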
Nondestructive testing (NDT) is a wide group of analysis techniques used in science and industry to evaluate the properties of a material, component or system without causing damage. The terms nondestructive examination, nondestructive inspection and nondestructive evaluation are also commonly used to describe this technology. Because NDT does not permanently alter the article being inspected, it is a valuable technique that can save both money and time in product evaluation, troubleshooting and research. The six most frequently used NDT methods are eddy-current, magnetic-particle, liquid penetrant, radiographic, ultrasonic and visual testing. NDT is used in forensic engineering, mechanical engineering, petroleum engineering, electrical engineering, civil engineering, systems engineering, aeronautical engineering and art. Innovations in the field of nondestructive testing have had a profound impact on medical imaging, including on echocardiography, medical ultrasonography and digital radiography. Various national and international trade associations exist to promote the industry and knowledge about non-destructive testing, and to develop standard methods and training.
These include the American Society for Nondestructive Testing, the Non-Destructive Testing Management Association, the International Committee for Non-Destructive Testing, the European Federation for Non-Destructive Testing and the British Institute of Non-Destructive Testing. NDT methods rely upon the use of electromagnetic radiation, sound, and other signal conversions to examine a wide variety of articles for integrity, composition, or condition with no alteration of the article undergoing examination. Visual inspection, the most commonly applied NDT method, is often enhanced by the use of magnification, cameras, or other optical arrangements for direct or remote viewing. The internal structure of a sample can be examined for a volumetric inspection with penetrating radiation, such as X-rays, neutrons or gamma radiation. Sound waves are utilized in the case of ultrasonic testing, another volumetric NDT method; the mechanical signal is reflected by conditions in the test article and evaluated for amplitude and distance from the search unit.
Another commonly used NDT method, applied to ferrous materials, involves the application of fine iron particles to a part while it is magnetized, either continually or residually. The particles are attracted to leakage fields of magnetism on or in the test object and form indications on the object's surface, which are evaluated visually. Contrast and probability of detection for a visual examination by the unaided eye are enhanced by using liquids to penetrate the test article surface, allowing for visualization of flaws or other surface conditions. This method involves using dyes, fluorescent or colored, suspended in fluids, and is used for non-magnetic materials, usually metals. Analyzing and documenting a nondestructive failure mode can also be accomplished using a high-speed camera recording continuously until the failure is detected. Detecting the failure can be accomplished using a sound detector or stress gauge which produces a signal to trigger the high-speed camera. These high-speed cameras have advanced recording modes to capture some non-destructive failures.
After the failure the high-speed camera will stop recording. The captured images can be played back in slow motion, showing what happened before and after the nondestructive event, image by image. NDT is used in a variety of settings covering a wide range of industrial activity, with new NDT methods and applications being continuously developed. Nondestructive testing methods are routinely applied in industries where a failure of a component would cause significant hazard or economic loss, such as in transportation, pressure vessels, building structures and hoisting equipment. In manufacturing, welds are commonly used to join two or more metal parts. Because these connections may encounter loads and fatigue during product lifetime, there is a chance that they may fail if not created to proper specification. For example, the base metal must reach a certain temperature during the welding process, must cool at a specific rate, and must be welded with compatible materials, or the joint may not be strong enough to hold the parts together, or cracks may form in the weld, causing it to fail.
Typical welding defects could cause a pipeline to rupture. Welds may be tested using NDT techniques such as industrial radiography or industrial CT scanning using X-rays or gamma rays, ultrasonic testing, liquid penetrant testing, magnetic particle inspection or eddy-current testing. In a proper weld, these tests would indicate a lack of cracks in the radiograph, show clear passage of sound through the weld and back, or indicate a clear surface without penetrant captured in cracks. Welding techniques may also be monitored with acoustic emission techniques before production to design the best set of parameters to use to properly join two materials. In the case of high-stress or safety-critical welds, weld monitoring will be employed to confirm that the welding parameters being used match those stated in the welding procedure; this verifies the weld as correct to procedure prior to nondestructive evaluation and metallurgy tests.
In archaeology, excavation is the exposure and recording of archaeological remains. An excavation site or "dig" is a site being studied; such a site excavation concerns itself with a specific archaeological site or a connected series of sites, and may be conducted over as little as several weeks to over a number of years. Numerous specialized techniques, each with its particular features, are used. Resources and other practical issues do not allow archaeologists to carry out excavations whenever and wherever they choose; these constraints mean that many known sites have been deliberately left unexcavated. This is with the intention of preserving them for future generations as well as recognising the role they serve in the communities that live near them. Excavation involves the recovery of several types of data from a site. These data include artifacts, ecofacts and, most importantly, archaeological context. Ideally, data from the excavation should suffice to reconstruct the site in three-dimensional space. The presence or absence of archaeological remains can often be suggested by remote sensing, such as ground-penetrating radar.
Indeed, grosser information about the development of the site may be drawn from this work, but the understanding of finer features requires excavation, though appropriate use of augering can also help. Excavation techniques have developed over the years from a treasure-hunting process to one which seeks to understand the sequence of human activity on a given site and that site's relationship with other sites and with the landscape in which it is set. The history of excavation began with a crude search for treasure and for artifacts which fell into the category of 'curio'. These curios were the subject of interest of antiquarians. In time it was appreciated that digging on a site destroyed the evidence of earlier people's lives which it had contained: once the curio had been removed from its context, most of the information it held was lost. It was from this realization that antiquarianism began to be replaced by archaeology, a process still being perfected. Archaeological material tends to accumulate in events. A gardener laid a gravel path or planted a bush in a hole.
A builder back-filled the trench. Years later, someone built a pigsty onto it and drained the pigsty into the nettle patch. Still later, the original wall blew over, and so on. Each event, which may have taken a short or long time to accomplish, leaves a context. This layer cake of events is referred to as the archaeological sequence or record, and it is by analysis of this sequence or record that excavation is intended to permit interpretation, which should lead to discussion and understanding. The prominent processual archaeologist Lewis Binford highlighted the fact that the archaeological evidence left at a site may not be indicative of the historical events that actually took place there. Using an ethnoarchaeological comparison, he looked at how hunters amongst the Nunamiut Iñupiat of north central Alaska spent a great deal of time in a certain area simply waiting for prey to arrive there, and how, during this period, they undertook other tasks to pass the time, such as the carving of various objects, including a wooden mould for a mask, a horn spoon and an ivory needle, as well as repairing a skin pouch and a pair of caribou skin socks.
Binford notes that all of these activities would have left evidence in the archaeological record, but that none of them would provide evidence for the primary reason that the hunters were in the area. As he remarked, waiting for animals to hunt "represented 24% of the total man-hours of activity recorded. No tools left on the site were used, there were no immediate material "byproducts" of the "primary" activity. All of the other activities conducted at the site were boredom reducers." There are two basic types of modern archaeological excavation. Research excavation is undertaken when time and resources are available to excavate the site fully and at a leisurely pace. These are now almost exclusively the preserve of academics or private societies who can muster enough volunteer labour and funds; the size of the excavation can be decided by the director as it goes on. Development-led excavation is undertaken by professional archaeologists when the site is threatened by building development. It is funded by the developer, meaning that time is more of a factor, and the work is focused only on areas to be affected by the building.
The workforce, however, is more skilled, and pre-development excavations provide a comprehensive record of the areas investigated. Rescue archaeology is sometimes thought of as a separate type of excavation, but in practice it tends to be a similar form of development-led practice. Various new forms of excavation terminology have appeared in recent years, such as "strip, map and sample", some of which have been criticized within the profession as jargon created to cover up for falling standards of practice. There are two main types of trial excavation in professional archaeology, both associated with development-led excavation: the test pit or trench, and the watching brief. The purpose of trial excavations is to determine the extent and characteristics of archaeological potential in a given area before extensive excavation work is undertaken. This is usually conducted in development-led excavations as part of project management planning.
A trade secret is a formula, process, instrument, commercial method, or compilation of information not known or reasonably ascertainable by others, by which a business can obtain an economic advantage over competitors or customers. In some jurisdictions, such secrets are referred to as confidential information. The precise language by which a trade secret is defined varies by jurisdiction, as do the particular types of information that are subject to trade secret protection. Three factors are common to all such definitions: a trade secret is information that is not known to the public, that confers some economic benefit on its holder because it is not publicly known, and that is the subject of reasonable efforts to maintain its secrecy. In international law, these three factors define a trade secret under Article 39 of the Agreement on Trade-Related Aspects of Intellectual Property Rights, commonly referred to as the TRIPS Agreement. In the United States, under the Economic Espionage Act of 1996, "a trade secret, as defined under 18 U.S.C. § 1839, has three parts: (1) information; (2) reasonable measures taken to protect the information; and (3) which derives independent economic value from not being publicly known." Trade secrets are an invisible component of a company's intellectual property, and their contribution to a company's value, measured as its market capitalization, can be major.
Being invisible, that contribution is hard to measure. Patents are a visible contribution, but delayed and unsuitable for internal innovations. Having an internal scoreboard of trade secrets provides insight into the cost of the risk of employees leaving to serve or start competing ventures. In contrast to registered intellectual property, trade secrets are, by definition, not disclosed to the world at large. Instead, owners of trade secrets seek to protect trade secret information from competitors by instituting special procedures for handling it, as well as technological and legal security measures. Legal protections include non-disclosure agreements, work-for-hire clauses and non-compete clauses. In other words, in exchange for an opportunity to be employed by the holder of secrets, an employee may sign agreements to not reveal their prospective employer's proprietary information, to surrender or assign to their employer ownership rights to intellectual work and work-products produced during the course of employment, and to not work for a competitor for a given period of time.
Violation of the agreement generally carries the possibility of heavy financial penalties, which operate as a disincentive to revealing trade secrets. However, proving a breach of an NDA by a former stakeholder now working for a competitor, or prevailing in a lawsuit for breaching a non-compete clause, can be difficult. A holder of a trade secret may also require similar agreements from other parties he or she deals with, such as vendors and board members. As a company can protect its confidential information through NDA, work-for-hire and non-compete contracts with its stakeholders, these protective contractual measures effectively create a perpetual monopoly on secret information that does not expire as would a patent or copyright. The lack of formal protection associated with registered intellectual property rights, however, means that a third party not bound by a signed agreement is not prevented from independently duplicating and using the secret information once it is discovered, such as through reverse engineering. Therefore, trade secrets such as secret formulae are often protected by restricting the key information to a few trusted individuals.
A famous example of a product protected by trade secrets is Coca-Cola. Because protection of trade secrets can, in principle, extend indefinitely, it may provide an advantage over patent protection and other registered intellectual property rights, which last only for a specific duration. The Coca-Cola company, for example, has no patent for the formula of Coca-Cola and has been effective in protecting it for many more years than the 20 years of protection that a patent would have provided; in fact, Coca-Cola refused to reveal its trade secret under at least two judges' orders. Companies try to discover one another's trade secrets through lawful methods, such as reverse engineering or employee poaching, on the one hand, and through unlawful methods, including industrial espionage, on the other. Acts of industrial espionage are generally illegal in their own right under the relevant governing laws, and penalties can be harsh. The importance of that illegality to trade secret law is this: if a trade secret is acquired by improper means, the secret is generally deemed to have been misappropriated.
Thus, if a trade secret has been acquired via industrial espionage, its acquirer will be subject to legal liability for having acquired it improperly. Notwithstanding this, the holder of the trade secret is obliged to protect against such espionage to some degree in order to safeguard the secret, as under most trade secret regimes a trade secret is not deemed to exist unless its purported holder takes reasonable steps to maintain its secrecy. Commentators starting with A. Arthur Schiller assert that trade secrets were protected under Roman law by a claim known as actio servi corrupti, interpreted as an "action for making a slave worse". The Roman law is described as follows: the Roman owner of a mark or firm name was protected against unfair usage.