A computer network is a digital telecommunications network that allows nodes to share resources. In computer networks, computing devices exchange data with each other over connections between nodes; these data links are established over cable media such as wires or optical fibre, or over wireless media such as Wi-Fi. Network devices that originate, route and terminate the data are called network nodes. Nodes are identified by network addresses and can include hosts such as personal computers and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other, more general communications protocols; this formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services, such as access to the World Wide Web, digital video and audio, shared use of application and storage servers and fax machines, and use of email and instant messaging applications, among many others.
Computer networks differ in the transmission medium used to carry their signals, the communications protocols that organize network traffic, the network's size, the traffic control mechanism, and organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes the following. In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment (SAGE). In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah.
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with his student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed, and were later used as an underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. Also in 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system based on the Aloha network developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s, and by 1998 Ethernet supported transmission speeds of 1 Gbit/s. Subsequently, higher speeds of up to 400 Gbit/s were added; the ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines. A computer network facilitates interpersonal communications, allowing users to communicate efficiently via various means: email, instant messaging, online chat, video telephone calls and video conferencing. A network allows sharing of computing resources.
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or using a shared storage device. A network allows sharing of files, data and other types of information, giving authorized users the ability to access information stored on other computers on the network.
A Controller Area Network (CAN) is a robust vehicle bus standard designed to allow microcontrollers and devices to communicate with each other in applications without a host computer. It is a message-based protocol, designed originally for multiplex electrical wiring within automobiles to save on copper, but it is also used in many other contexts. Development of the CAN bus started in 1983 at Robert Bosch GmbH; the protocol was released in 1986 at the Society of Automotive Engineers conference in Detroit, Michigan. The first CAN controller chips, produced by Intel and Philips, came on the market in 1987. Released in 1991, the Mercedes-Benz W140 was the first production vehicle to feature a CAN-based multiplex wiring system. Bosch published several versions of the CAN specification; the latest, CAN 2.0, was published in 1991. This specification has two parts: part A covers the standard frame format with 11-bit identifiers, and part B covers the extended format with 29-bit identifiers. A CAN device that uses 11-bit identifiers is commonly called CAN 2.0A, and a CAN device that uses 29-bit identifiers is commonly called CAN 2.0B. These standards are available from Bosch along with other specifications and white papers.
In 1993, the International Organization for Standardization released the CAN standard ISO 11898, later restructured into two parts: ISO 11898-1, which covers the data link layer, and ISO 11898-2, which covers the CAN physical layer for high-speed CAN. ISO 11898-3 was released later and covers the CAN physical layer for low-speed, fault-tolerant CAN. The physical layer standards ISO 11898-2 and ISO 11898-3 are not part of the Bosch CAN 2.0 specification; these standards may be purchased from the ISO. Bosch is still active in extending the CAN standards. In 2012, Bosch released CAN FD (CAN with Flexible Data-Rate); this specification uses a different frame format that allows a different data length as well as optionally switching to a faster bit rate after the arbitration is decided. CAN FD is compatible with existing CAN 2.0 networks, so new CAN FD devices can coexist on the same network with existing CAN devices. CAN bus is one of five protocols used in the on-board diagnostics (OBD)-II vehicle diagnostics standard; the OBD-II standard has been mandatory for all cars and light trucks sold in the United States since 1996. The EOBD standard has been mandatory for all petrol vehicles sold in the European Union since 2001 and all diesel vehicles since 2004.
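CAN frames carry a priority in their identifier: a 0 bit is electrically dominant on the bus, so when several nodes transmit simultaneously, the frame with the numerically lowest identifier survives arbitration bit by bit, without corrupting any frame. The following is a minimal sketch of that arbitration rule, not an implementation of any CAN controller; the identifier values are arbitrary examples.

```python
def arbitrate(ids, width=11):
    """Simulate CAN bitwise arbitration among competing identifiers.

    Identifiers are sent MSB-first; a 0 bit is dominant on the bus.
    A node that transmits a recessive 1 but reads a dominant 0 back
    loses arbitration and drops out, so the frame with the numerically
    lowest identifier wins without any bits being destroyed.
    """
    contenders = list(ids)
    for bit in range(width - 1, -1, -1):
        # The bus level is the wired-AND of all transmitted bits.
        bus = min((i >> bit) & 1 for i in contenders)
        # Nodes that sent a recessive 1 while the bus reads 0 back off.
        contenders = [i for i in contenders if (i >> bit) & 1 == bus]
    return contenders[0]

# Three nodes start transmitting at once; the lowest identifier wins.
print(hex(arbitrate([0x65D, 0x65B, 0x3FF])))  # -> 0x3ff
```

The same rule applies unchanged to 29-bit extended identifiers by passing `width=29`.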
Applications of CAN include: passenger vehicles and buses; electronic equipment for aviation and navigation; industrial automation and mechanical control; elevators and escalators; building automation; and medical instruments and equipment. The modern automobile may have as many as 70 electronic control units (ECUs) for various subsystems. The biggest processor is the engine control unit. Others are used for the transmission, antilock braking/ABS, cruise control, electric power steering, audio systems, power windows, mirror adjustment, and battery and recharging systems for hybrid/electric cars. Some of these form independent subsystems; a subsystem may need to receive feedback from sensors, and the CAN standard was devised to fill this need. One key advantage is that interconnection between different vehicle systems can allow a wide range of safety and convenience features to be implemented using software alone - functionality which would add cost and complexity if such features were "hard wired" using traditional automotive electrics. Examples include: Auto start/stop: various sensor inputs from around the vehicle are collated via the CAN bus to determine whether the engine can be shut down when stationary for improved fuel economy and emissions.
Electric park brakes: the "hill hold" functionality takes input from the vehicle's tilt sensor and the road speed sensors via the CAN bus to determine whether the vehicle is stopped on an incline. Inputs from seat belt sensors are also fed in via the CAN bus to determine whether the seat belts are fastened, so that the parking brake will automatically release upon moving off. Parking assist systems: when the driver engages reverse gear, the transmission control unit can send a signal via the CAN bus to activate both the parking sensor system and the door control module for the passenger-side door mirror to tilt downward to show the position of the curb. The CAN bus also takes inputs from the rain sensor to trigger the rear windscreen wiper when reversing. Auto lane assist/collision avoidance systems: the inputs from the parking sensors are also used by the CAN bus to feed outside proximity data to driver assist systems such as Lane Departure Warning; more recently, these signals travel through the CAN bus to actuate brake-by-wire in active collision avoidance systems.
Auto brake wiping: input is taken from the rain sensor via the CAN bus to the ABS module to initiate an imperceptible application of the brakes whilst driving to clear moisture from the brake rotors. Some high-performance Audi and BMW models incorporate this feature. Sensors can be placed at the most suitable location, and their data used by several ECUs. For example, outdoor temperature sensors can be placed in the outside mirrors, avoiding heating by the engine, with the data used by the engine management, the climate control and the driver display. In recent years, the LIN bus standard has been introduced to complement CAN for non-critical subsystems such as air-conditioning and infotainment, where data transmission speed and reliability are less critical.
1-Wire is a device communications bus system designed by Dallas Semiconductor Corp. that provides low-speed data and power over a single conductor. 1-Wire is similar in concept to I²C, but with lower data rates and longer range. It is typically used to communicate with small, inexpensive devices such as digital thermometers and weather instruments. A network of 1-Wire devices with an associated master device is called a MicroLAN. One distinctive feature of the bus is the possibility of using only two wires: data and ground. To accomplish this, 1-Wire devices include an 800 pF capacitor to store charge and power the device during periods when the data line is active. 1-Wire devices are available in different packages: integrated circuit, a TO-92 package, and a portable form called an iButton, a small stainless-steel canister. Manufacturers also produce devices more complex than a single component that use the 1-Wire bus to communicate. 1-Wire devices can fit in different places in a system: a device might be one of many components on a circuit board within a product.
It might be a single component within a device such as a temperature probe, or it could be attached to a device being monitored. Some laboratory systems connect to 1-Wire devices using cables with modular connectors or CAT-5 cable; in such systems, RJ11 modular connectors are popular. Systems of sensors and actuators can be built by wiring together many 1-Wire components; each 1-Wire component contains all of the logic needed to operate on the 1-Wire bus. Examples include temperature loggers, timers, voltage and current sensors, battery monitors, and memory. These can be connected to a PC using a bus converter; USB, RS-232 serial and parallel port interfaces are popular solutions for connecting a MicroLAN to the host PC. 1-Wire devices can also be interfaced directly to microcontrollers from various vendors. iButtons are connected to 1-Wire bus systems by means of sockets with contacts that touch the "lid" and "base" of the canister. Alternatively, the connection can be semi-permanent with a socket into which the iButton clips, but from which it is easily removed.
The Java Ring is a ring-mounted iButton with a Java virtual machine compatible with the Java Card 2.0 specification; these were given to attendees of the 1998 JavaOne conference. Each 1-Wire chip has a unique identifier code. This feature makes the chips, especially iButtons, suitable as electronic keys; some uses include locks, burglar alarms, computer systems, manufacturer-approved accessories and time clocks. iButtons have been used as Akbil smart tickets for public transport in Istanbul. An iButton's temperature data can also be read by Android smartphones with USB OTG support via a hardware interface connection. Apple MagSafe and MagSafe 2 connector-equipped power supplies and Mac laptops use the 1-Wire protocol to send and receive data to and from the connected Mac laptop via the middle pin of the connector; the data include the power supply model and serial number. Genuine Dell laptop power supplies use the 1-Wire protocol to send data via the third wire to the laptop; the laptop will refuse charging if the adapter does not meet requirements.
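The unique identifier mentioned above is a 64-bit ROM code: an 8-bit family code, a 48-bit serial number, and an 8-bit CRC that protects the other 56 bits. As a sketch of how that check byte is verified, here is the Dallas/Maxim CRC-8 algorithm (polynomial x^8 + x^5 + x^4 + 1, processed LSB-first with the bit-reversed constant 0x8C), applied to the worked example ROM from Maxim's application note 27:

```python
def crc8_dallas(data):
    """Dallas/Maxim CRC-8 over an iterable of bytes.

    Polynomial x^8 + x^5 + x^4 + 1, LSB-first, which is why the XOR
    constant is the bit-reversed value 0x8C. Used to protect the 64-bit
    ROM code of every 1-Wire device.
    """
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8C if crc & 1 else crc >> 1
    return crc

# Family code 0x02 plus a 48-bit serial number, the worked example from
# Maxim application note 27; the device's stored CRC byte is 0xA2.
rom = [0x02, 0x1C, 0xB8, 0x01, 0x00, 0x00, 0x00]
print(hex(crc8_dallas(rom)))  # -> 0xa2
```

A master can therefore validate a ROM code it has just read by computing the CRC over the first seven bytes and comparing it with the eighth.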
In any MicroLAN, there is always one master in overall charge, which may be a PC or a microcontroller. The master initiates activity on the bus. Protocols are built into the master's software to detect collisions; after a collision, the master retries the required communication. A 1-Wire network is a single open-drain wire with a single pull-up resistor; the pull-up resistor pulls the wire up to 5 volts. The master device and all the slaves each have a single open-drain connection to drive the wire, and a way to sense the state of the wire. Despite the "1-Wire" name, all devices must also have a second wire, a ground connection, to permit a return current to flow through the data wire. Communication occurs when a master or slave pulls the bus low, i.e. connects the data wire to ground through its output MOSFET. The data wire is high when idle, so it can power a limited number of slave devices. Data rates of 16.3 kbit/s can be achieved; there is also an overdrive mode which speeds up the communication by a factor of 10.
A short 1-Wire bus can be driven from a single digital I/O pin on a microcontroller, or a UART can be used. Specific 1-Wire driver and bridge chips, including USB "bridge" chips, are also available; bridge chips are useful for driving long cables. Twisted pairs of up to 300 meters have been tested by the manufacturer, though these extreme lengths require reducing the pull-up resistance from 5 kΩ to 1 kΩ. The master starts a transmission with a reset pulse, which pulls the wire to 0 volts for at least 480 µs; this resets every slave device on the bus. After that, any slave device, if present, shows that it exists with a "presence" pulse: it holds the bus low for at least 60 µs after the master releases the bus. To send a "1", the bus master sends a brief low pulse; to send a "0", the master sends a 60 µs low pulse. The falling edge of the pulse is used to start a monostable multivibrator in the slave device, which reads the data line about 30 µs after the falling edge. The slave's internal timer is an inexpensive analog timer.
It therefore has analog tolerances, so the pulse widths must leave margins: "0" pulses have to be at least 60 µs long, and "1" pulses can be no longer than 15 µs. When receiving data, the master sends a brief low pulse to start each bit slot; a transmitting slave then either holds the bus low to send a "0" or does nothing, letting the bus return high, to send a "1".
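The timing rules above can be summarized in a small simulation. This is an illustrative sketch of the sampling logic only, not a bus driver; the constants come directly from the figures in the text, and `slave_reads` is a hypothetical helper modelling a slave's monostable timer.

```python
# Timings from the text, in microseconds (illustrative, not a real driver).
RESET_LOW = 480       # master reset pulse: at least 480 us low
PRESENCE_LOW = 60     # slave presence pulse: at least 60 us low
WRITE0_LOW = 60       # a "0" bit: master holds the bus low for 60 us
WRITE1_LOW_MAX = 15   # a "1" bit: brief pulse, no longer than 15 us
SLAVE_SAMPLE_AT = 30  # slave samples the line ~30 us after the falling edge

def slave_reads(master_low_us):
    """Return the bit a slave recovers from a master-driven low pulse.

    The slave's monostable timer samples the line about 30 us after the
    falling edge, so a pulse still low at that moment reads as 0, while
    a short pulse that has already been released reads as 1.
    """
    return 0 if master_low_us > SLAVE_SAMPLE_AT else 1

print(slave_reads(WRITE0_LOW))       # -> 0 (60 us pulse still low at 30 us)
print(slave_reads(WRITE1_LOW_MAX))   # -> 1 (bus released well before 30 us)
```

The gap between the 15 µs maximum for a "1" and the 30 µs sampling point is exactly the margin that absorbs the analog timer's tolerances.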
Highway Addressable Remote Transducer Protocol
The HART Communication Protocol is a hybrid analog-plus-digital open protocol for industrial automation. Its most notable advantage is that it can communicate over legacy 4–20 mA analog instrumentation current loops, sharing the pair of wires used by analog-only host systems. HART is used in process and instrumentation systems ranging from small automation applications up to highly sophisticated industrial applications. According to Emerson, due to the huge installed base of 4–20 mA systems throughout the world, the HART Protocol is one of the most popular industrial protocols today. HART has made a good transition protocol for users who wished to keep using legacy 4–20 mA signals but wanted to implement a "smart" protocol. The protocol was developed by Rosemount Inc., building on the early Bell 202 communications standard, in the mid-1980s as a proprietary digital communication protocol for their smart field instruments. It soon evolved into HART, and in 1986 it was made an open protocol. Since then, the capabilities of the protocol have been enhanced by successive revisions to the specification.
There are two main operational modes of HART instruments: point-to-point mode and multi-drop mode. In point-to-point mode, the digital signals are overlaid on the 4–20 mA loop current. Both the 4–20 mA current and the digital signal are valid signalling protocols between the controller and the measuring instrument or final control element; the polling address of the instrument is set to "0". Only one instrument can be put on each instrument cable signal pair. One signal, chosen by the user, is conveyed as the 4–20 mA analog signal; other signals are sent digitally on top of it. For example, pressure can be sent as 4–20 mA, representing a range of pressures, while temperature is sent digitally over the same wires. In point-to-point mode, the digital part of the HART protocol can be seen as a kind of digital current loop interface. In multi-drop mode, the analog loop current is fixed at 4 mA and it is possible to have more than one instrument on a signal loop. HART revisions 3 through 5 allowed polling addresses of the instruments to be in the range 1–15.
HART revision 6 allowed addresses 1 to 63; each instrument must have a unique address. The request HART packet has the following structure. Preamble: currently all newer devices implement a five-byte preamble, since anything longer reduces the communication speed; however, masters are responsible for backwards support. Master communication to a new device starts with the maximum preamble length and is reduced once the preamble size for the current device is determined. Start byte: this byte specifies that the communication packet is starting. Address: specifies the destination address. The original addressing scheme used only four bits to specify the device address, which limited the number of devices to 16 including the master; the newer scheme utilizes 38 bits to specify the device address, which is requested from the device using either Command 0 or Command 11. Command: a one-byte numerical value representing the command to be executed; Command 0 and Command 11 are used to request the device number. Number of data bytes: specifies the number of communication data bytes to follow.
Status: the status field is two bytes, used in the slave's response to inform the master whether it completed the task and what its current health status is. Data: the data contained in this field depend on the command to be executed. Checksum: composed of an XOR of all the bytes starting from the start byte and ending with the last byte of the data field, including those bytes. Each manufacturer that participates in the HART convention is assigned an identification number; this number is communicated as part of the basic device identification command used when first connecting to a device.
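The checksum rule just described is a simple longitudinal parity. Here is a minimal sketch of it; the frame bytes in the example are illustrative values standing in for a start byte, address, command, byte count and data, not a captured HART frame.

```python
from functools import reduce

def hart_checksum(frame):
    """Longitudinal parity check byte: the XOR of every byte from the
    start byte through the last byte of the data field, inclusive."""
    return reduce(lambda acc, b: acc ^ b, frame, 0)

# Hypothetical frame bytes (start byte, address, command, byte count,
# two data bytes) chosen only to illustrate the XOR.
frame = [0x02, 0x80, 0x00, 0x02, 0x40, 0x07]
print(hex(hart_checksum(frame)))  # -> 0xc7
```

A receiver can validate a frame by XOR-ing the received check byte into the same running parity: the result over the whole frame including the checksum must be zero.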
Industrial control system
Industrial control system (ICS) is a general term that encompasses several types of control systems and associated instrumentation used for industrial process control. Such systems can range from a few modular panel-mounted controllers to large interconnected and interactive distributed control systems with many thousands of field connections. All systems receive data from remote sensors measuring process variables, compare these with desired set points, and derive command functions which are used to control a process through final control elements such as control valves. The larger systems are usually implemented by supervisory control and data acquisition (SCADA) systems, or distributed control systems (DCS) and programmable logic controllers (PLCs), though SCADA and PLC systems are scalable down to small systems with few control loops. Such systems are extensively used in industries such as chemical processing, pulp and paper manufacture, power generation, oil and gas processing, and telecommunications. The simplest control systems are based around small discrete controllers with a single control loop each.
These are panel-mounted, which allows direct viewing of the front panel and provides means of manual intervention by the operator, either to manually control the process or to change control setpoints. Originally these were pneumatic controllers, a few of which are still in use, but nearly all are now electronic. Quite complex systems can be created with networks of these controllers communicating using industry-standard protocols. Networking allows the use of local or remote SCADA operator interfaces, and enables the cascading and interlocking of controllers. However, as the number of control loops increases for a system design, there is a point where the use of a programmable logic controller or distributed control system is more manageable or cost-effective. A distributed control system is a digital processor control system for a process or plant, wherein controller functions and field connection modules are distributed throughout the system. As the number of control loops grows, a DCS becomes more cost-effective than discrete controllers.
Additionally, a DCS provides supervisory management over large industrial processes. In a DCS, a hierarchy of controllers is connected by communication networks, allowing centralised control rooms and local on-plant monitoring and control. A DCS enables easy configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other computer systems such as production control. It also enables more sophisticated alarm handling, introduces automatic event logging, removes the need for physical records such as chart recorders, and allows the control equipment to be networked and thereby located locally to the equipment being controlled, reducing cabling. A DCS typically uses custom-designed processors as controllers, and uses either proprietary interconnections or standard protocols for communication. Input and output modules form the peripheral components of the system; the processors receive information from the input modules, process the information, and decide control actions to be performed by the output modules.
The input modules receive information from sensing instruments in the process and the output modules transmit instructions to the final control elements, such as control valves. The field inputs and outputs can be either continuously changing analog signals, e.g. a 4–20 mA current loop, or two-state signals that switch either on or off, such as relay contacts or a semiconductor switch. Distributed control systems can also support Foundation Fieldbus, PROFIBUS, HART, Modbus and other digital communication buses that carry not only input and output signals but also advanced messages such as error diagnostics and status signals. Supervisory control and data acquisition (SCADA) is a control system architecture that uses computers, networked data communications and graphical user interfaces for high-level process supervisory management. The operator interfaces which enable monitoring and the issuing of process commands, such as controller set point changes, are handled through the SCADA supervisory computer system. However, the real-time control logic or controller calculations are performed by networked modules which connect to other peripheral devices such as programmable logic controllers and discrete PID controllers which interface to the process plant or machinery.
The SCADA concept was developed as a universal means of remote access to a variety of local control modules, which could be from different manufacturers, allowing access through standard automation protocols. In practice, large SCADA systems have grown to become similar to distributed control systems in function, while using multiple means of interfacing with the plant; they can control large-scale processes that can include multiple sites, and work over large distances. This is a commonly used architecture in industrial control systems; however, there are concerns about SCADA systems being vulnerable to cyberwarfare or cyberterrorism attacks. The SCADA software operates on a supervisory level, as control actions are performed automatically by RTUs or PLCs. SCADA control functions are usually restricted to basic overriding or supervisory-level intervention. A feedback control loop is directly controlled by the RTU or PLC, while the SCADA software monitors the overall performance of the loop. For example, a PLC may control the flow of cooling water through part of an industrial process to a set point level, but the SCADA system software will allow operators to change the set points for the flow.
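The division of labour in the cooling-water example can be sketched as follows. This is a toy model, not real SCADA or PLC code: a simple proportional controller stands in for the PLC's real-time loop logic, and the operator's only action (as in a real SCADA system) is to change the setpoint, never to drive the valve directly.

```python
def plc_scan(flow, setpoint, gain=0.5):
    """One PLC scan cycle: a simple proportional controller (a stand-in
    for real PID logic) nudges the flow toward the setpoint."""
    return flow + gain * (setpoint - flow)

flow, setpoint = 0.0, 10.0
for _ in range(30):       # the PLC regulates autonomously, scan by scan
    flow = plc_scan(flow, setpoint)

setpoint = 15.0           # SCADA operator intervention: setpoint change only
for _ in range(30):       # the PLC again converges on its own
    flow = plc_scan(flow, setpoint)

print(round(flow, 3))  # -> 15.0
```

Even if the supervisory link is lost after the setpoint change, the loop keeps regulating, which is exactly why control is kept in the RTU or PLC rather than in the SCADA host.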
The SCADA system also enables alarm conditions, such as loss of flow or high temperature, to be displayed and recorded. PLCs can range from small modular devices with tens of inputs and outputs in a housing integral with the processor, to large rack-mounted modular devices with thousands of inputs and outputs, often networked to other PLC and SCADA systems.
Computing is any activity that uses computers. It includes developing hardware and software, and using computers to manage and process information, communicate, and entertain. Computing is a critically important, integral component of modern industrial technology. Major computing disciplines include computer engineering, software engineering, computer science, information systems, and information technology. The ACM Computing Curricula 2005 defined "computing" as follows: "In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers. Thus, computing includes designing and building hardware and software systems for a wide range of purposes; the list is endless, the possibilities are vast." It defines five sub-disciplines of the computing field: computer science, computer engineering, information systems, information technology, and software engineering. However, Computing Curricula 2005 also recognizes that the meaning of "computing" depends on the context: computing has other meanings that are more specific, based on the context in which the term is used.
For example, an information systems specialist will view computing somewhat differently from a software engineer. Regardless of the context, doing computing well can be complicated and difficult; because society needs people to do computing well, we must think of computing not only as a profession but also as a discipline. The term "computing" has sometimes been defined narrowly, as in a 1989 ACM report on Computing as a Discipline: the discipline of computing is the systematic study of algorithmic processes that describe and transform information - their theory, design, efficiency and application - and the fundamental question underlying all computing is "What can be automated?" The term "computing" is also synonymous with counting and calculating; in earlier times it was used in reference to the action performed by mechanical computing machines, and before that, to human computers. The history of computing is longer than the history of computing hardware and modern computing technology, and includes the history of methods intended for pen and paper or for chalk and slate, with or without the aid of tables.
Computing is intimately tied to the representation of numbers, but long before abstractions like the number arose, there were mathematical concepts to serve the purposes of civilization; these concepts include one-to-one correspondence, comparison to a standard, and the 3-4-5 right triangle. The earliest known tool for use in computation was the abacus, thought to have been invented in Babylon circa 2400 BC; its original style of usage was by lines drawn in sand with pebbles. Abaci of a more modern design are still used as calculation tools today; the abacus was the first known calculation aid, preceding Greek methods by 2,000 years. The first recorded idea of using digital electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams. Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations. A computer is a machine that manipulates data according to a set of instructions called a computer program.
The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form enables a programmer to study and develop a sequence of steps known as an algorithm. Because the instructions can be carried out in different types of computers, a single set of source instructions is converted to machine instructions according to the central processing unit type. The execution process carries out the instructions in a computer program. Instructions express the computations performed by the computer; they trigger sequences of simple actions on the executing machine, and those actions produce effects according to the semantics of the instructions. Computer software, or just "software", is a collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more computer programs and data held in the storage of the computer for some purpose. In other words, software is a set of programs and procedures, together with its documentation, concerned with the operation of a data processing system.
Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software. The term was coined to contrast with the old term hardware; in contrast to hardware, software is intangible. Software is sometimes used in a more narrow sense, meaning application software only. Application software, also known as an "application" or an "app", is computer software designed to help the user perform specific tasks. Examples include enterprise software, accounting software, office suites, graphics software and media players. Many application programs deal principally with documents. Apps may be bundled with the computer and its system software, or published separately. Application software is contrasted with system software and middleware, which manage and integrate a computer's capabilities but typically do not directly apply them in the performance of tasks that benefit the user.
MTConnect is a manufacturing technical standard to retrieve process information from numerically controlled machine tools. The initiative began as a result of lectures given by David Edstrom of Sun Microsystems and David Patterson, professor of Computer Science at the University of California, Berkeley, at the 2006 annual meeting of the Association for Manufacturing Technology; the two lectures promoted an open communication standard to enable Internet connectivity to manufacturing equipment. Initial development was carried out by a joint effort between the UCB Electrical Engineering and Computer Sciences department, the UCB Mechanical Engineering department and the Georgia Institute of Technology, using input from industry representatives; the resulting standard is available under royalty-free licensing terms. MTConnect is a protocol designed for the exchange of data between shop floor equipment and software applications used for monitoring and data analysis. MTConnect is referred to as a read-only standard, meaning that it only defines the extraction of data from control devices, not the writing of data to a control device.
Freely available, open standards are used for all aspects of MTConnect. Data from shop-floor devices is presented in XML format and is retrieved from information providers, called Agents, using Hypertext Transfer Protocol (HTTP) as the underlying transport protocol. MTConnect provides a RESTful interface: no session must be established to retrieve data from an MTConnect Agent, and no logon or logoff sequence is required. Lightweight Directory Access Protocol (LDAP) is recommended for discovery services. Version 1.0 was released in December 2008. The first public demonstration of MTConnect occurred at the International Manufacturing Technology Show (IMTS) held in Chicago, Illinois, in September 2008. There, 25 industrial equipment manufacturers networked their machinery control systems, providing process information that could be retrieved from any web-enabled client connected to the network. Subsequent demonstrations occurred at EMO in Milan, Italy, in October 2009 and at the 2010 IMTS in Chicago. The MTConnect standard has three sections.
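In practice, an Agent answers plain HTTP GET requests (such as probe for the device structure, current for the latest values, and sample for a window of the data stream) and replies with an XML document. The sketch below parses a trimmed, hypothetical current-style response using only Python's standard library; the device name, data values and schema version are illustrative assumptions, not output from a real machine.

```python
import xml.etree.ElementTree as ET

# Hypothetical, trimmed example of what an MTConnect Agent might return
# for a "current" request. Element names follow the MTConnectStreams
# pattern, but the values and identifiers here are invented.
SAMPLE_CURRENT = """<?xml version="1.0" encoding="UTF-8"?>
<MTConnectStreams xmlns="urn:mtconnect.org:MTConnectStreams:1.1">
  <Streams>
    <DeviceStream name="Mill-1" uuid="mill-1-uuid">
      <ComponentStream component="Rotary" name="C">
        <Samples>
          <SpindleSpeed dataItemId="c2" timestamp="2010-09-16T12:00:00Z"
                        sequence="42">2500</SpindleSpeed>
        </Samples>
      </ComponentStream>
    </DeviceStream>
  </Streams>
</MTConnectStreams>"""

NS = {"m": "urn:mtconnect.org:MTConnectStreams:1.1"}

def spindle_speeds(xml_text):
    """Return (device name, speed) pairs for every SpindleSpeed sample."""
    root = ET.fromstring(xml_text)
    out = []
    for dev in root.findall(".//m:DeviceStream", NS):
        for sample in dev.findall(".//m:SpindleSpeed", NS):
            out.append((dev.get("name"), float(sample.text)))
    return out

print(spindle_speeds(SAMPLE_CURRENT))
```

Because the interface is read-only HTTP plus XML, any web-enabled client can consume the data this way; a real deployment would fetch the document from the Agent's URL instead of a canned string.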
The first section provides information on the protocol and the structure of the XML documents via XML schemas. The second section specifies the description of the available data; the third and last section specifies the organization of the data streams that can be provided by a manufacturing device. The MTConnect Institute is considering adding a fourth section to support mobile assets, including cutting tools and workholding. MTConnect took an incremental approach to defining the requirements for manufacturing device communications: it did not exhaustively define every possible piece of data an application can collect from a manufacturing device, but instead worked forward from business and research objectives to define the elements required to meet those needs. The standard catalogued the important components and data items for metal-cutting devices. MTConnect provides an extensible XML schema that allows implementors to add custom data to meet their specific needs, while preserving as much commonality as possible.
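The extensibility mechanism can be pictured as a vendor declaring data-item types in its own XML namespace alongside the standard ones, so generic clients still see a well-formed device description. The fragment and the namespace `urn:example.com:MillExtensions:1.0` below are hypothetical, meant only to show the pattern.

```python
import xml.etree.ElementTree as ET

# Hypothetical device-description (probe-style) fragment: a vendor adds
# a custom data-item type in its own "x:" namespace next to a standard
# MTConnect type. All identifiers here are illustrative.
SAMPLE_DEVICES = """<?xml version="1.0"?>
<MTConnectDevices xmlns="urn:mtconnect.org:MTConnectDevices:1.1"
                  xmlns:x="urn:example.com:MillExtensions:1.0">
  <Devices>
    <Device name="Mill-1" uuid="mill-1-uuid">
      <DataItems>
        <DataItem id="c2" category="SAMPLE" type="SPINDLE_SPEED"/>
        <DataItem id="x1" category="SAMPLE" type="x:COOLANT_PH"/>
      </DataItems>
    </Device>
  </Devices>
</MTConnectDevices>"""

NS = {"m": "urn:mtconnect.org:MTConnectDevices:1.1"}

def data_item_types(xml_text):
    """List every declared data-item type, standard or vendor-extended."""
    root = ET.fromstring(xml_text)
    return [di.get("type") for di in root.findall(".//m:DataItem", NS)]

print(data_item_types(SAMPLE_DEVICES))
```

A client that only understands the core schema can simply ignore prefixed types it does not recognize, which is how custom data coexists with the common vocabulary.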
On September 16, 2010, the MTConnect Institute and the OPC Foundation announced cooperation between the two organizations. The maintenance cost of, and productivity losses from, unplanned downtime for machine-tool components such as spindle bearings and ball screws could be reduced if one could proactively take action prior to failure. In addition, cutting tools and inserts are expensive to replace while they are still in good condition, but replacing them too late can be costly due to scrap and re-work. The proposed health-monitoring application uses MTConnect to extract controller data and applies pattern-recognition algorithms to assess the health condition of the spindle and the machine-tool axes. The health-assessment approach is based on running a routine program each shift in which the most recent data patterns are compared to the baseline data patterns. An online tool-condition-monitoring module is also proposed; it uses controller data such as the spindle motor current, together with additional add-on sensors, to estimate and predict tool wear.
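As a deliberately simplistic illustration of the shift-by-shift comparison (the actual application would use richer pattern-recognition algorithms), one could score the drift between the baseline trace and the most recent trace of the routine test cycle with a root-mean-square deviation and raise an alert past a threshold. The traces, units and threshold below are invented for the sketch.

```python
import math

def rms_deviation(baseline, recent):
    """Root-mean-square deviation between a baseline trace and the most
    recent trace of the same routine test cycle."""
    assert len(baseline) == len(recent)
    return math.sqrt(sum((b - r) ** 2 for b, r in zip(baseline, recent))
                     / len(baseline))

def health_alert(baseline, recent, threshold):
    """Flag the machine when the recent pattern drifts past the threshold."""
    return rms_deviation(baseline, recent) > threshold

# Invented spindle-motor-current samples (A) over one test cycle.
baseline = [1.0, 1.2, 1.1, 1.3]   # recorded when the spindle was known-good
recent   = [1.4, 1.7, 1.6, 1.9]   # same cycle, current shift

print(health_alert(baseline, recent, threshold=0.3))
```

The point of running the same routine program each shift is exactly to make such traces comparable: the machining conditions are held fixed, so drift in the signal can be attributed to component wear rather than to the part program.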
With the added transparency of machine-tool health information, one can take proactive action before significant downtime or productivity losses occur. For the manufacturing industry, MTConnect fits into the architecture of cyber-physical systems (CPS) for manufacturing; the five-level architecture offers a framework for adopting all functionalities of CPS. Data acquisition and communication technologies such as MTConnect fall into the smart-connection level of CPS.