Geometry is a branch of mathematics concerned with questions of shape, the relative position of figures, and the properties of space. A mathematician who works in the field of geometry is called a geometer. Geometry arose independently in a number of early cultures as a practical way of dealing with lengths and volumes. Elements of formal mathematical science began to emerge in Western geometry as early as the 6th century BC. By the 3rd century BC, geometry was put into an axiomatic form by Euclid, whose treatment, Euclid's Elements, set a standard for many centuries to follow. Geometry also arose independently in India, with texts providing rules for geometric constructions appearing as early as the 3rd century BC. Islamic scientists expanded on this work during the Middle Ages. By the early 17th century, geometry had been put on a solid analytic footing by mathematicians such as René Descartes and Pierre de Fermat. Since then, and into modern times, geometry has expanded into non-Euclidean geometry and manifolds, describing spaces that lie beyond the normal range of human experience.
While geometry has evolved throughout the years, some general concepts remain more or less fundamental to it. These include the concepts of points, planes, surfaces and curves, as well as the more advanced notions of manifolds, topology and metric. Geometry has applications in many fields, including art and physics, as well as in other branches of mathematics. Contemporary geometry has many subfields. Euclidean geometry is geometry in its classical sense; the mandatory educational curriculum of the majority of nations includes the study of points, planes, triangles, similarity, solid figures and analytic geometry. Euclidean geometry has applications in computer science and various branches of modern mathematics. Differential geometry uses techniques of calculus and linear algebra to study problems in geometry; it has applications in physics, including in general relativity. Topology is the field concerned with the properties of geometric objects that are unchanged by continuous mappings. In practice, this means dealing with large-scale properties of spaces, such as connectedness and compactness.
Convex geometry investigates convex shapes in Euclidean space and its more abstract analogues using techniques of real analysis. It has close connections to convex analysis and functional analysis, and important applications in number theory. Algebraic geometry studies geometry through the use of multivariate polynomials and other algebraic techniques; it has applications in areas including cryptography and string theory. Discrete geometry is concerned with questions of the relative position of simple geometric objects, such as points and circles; it shares many principles with combinatorics. Computational geometry deals with algorithms and their implementations for manipulating geometrical objects. Although it is a young area of geometry, it has many applications in computer vision, image processing, computer-aided design, medical imaging and other fields. The earliest recorded beginnings of geometry can be traced to ancient Mesopotamia and Egypt in the 2nd millennium BC. Early geometry was a collection of empirically discovered principles concerning lengths, angles and volumes, developed to meet practical needs in surveying, construction and various crafts.
The earliest known texts on geometry are the Egyptian Rhind Papyrus and Moscow Papyrus, and Babylonian clay tablets such as Plimpton 322. For example, the Moscow Papyrus gives a formula for calculating the volume of a truncated pyramid, or frustum. Clay tablets demonstrate that Babylonian astronomers implemented trapezoid procedures for computing Jupiter's position and motion within time-velocity space; these geometric procedures anticipated the Oxford Calculators, including the mean speed theorem, by 14 centuries. South of Egypt, the ancient Nubians established a system of geometry including early versions of sun clocks. In the 7th century BC, the Greek mathematician Thales of Miletus used geometry to solve problems such as calculating the height of pyramids and the distance of ships from the shore; he is credited with the first use of deductive reasoning applied to geometry, by deriving four corollaries to Thales' Theorem. Pythagoras established the Pythagorean School, which is credited with the first proof of the Pythagorean theorem, though the statement of the theorem has a long history.
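The frustum formula the Moscow Papyrus applies can be checked directly. A minimal sketch in Python, using the worked example traditionally attributed to the papyrus (base side 4, top side 2, height 6):

```python
# Volume of a square frustum (truncated pyramid):
# V = h/3 * (a^2 + a*b + b^2), with base side a, top side b, height h.
def frustum_volume(a, b, h):
    return h / 3 * (a * a + a * b + b * b)

# The papyrus's traditionally cited example: a = 4, b = 2, h = 6 gives 56.
assert frustum_volume(4, 2, 6) == 56.0
```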
Eudoxus developed the method of exhaustion, which allowed the calculation of areas and volumes of curvilinear figures, as well as a theory of ratios that avoided the problem of incommensurable magnitudes; both enabled subsequent geometers to make significant advances. Around 300 BC, geometry was revolutionized by Euclid, whose Elements, widely considered the most successful and influential textbook of all time, introduced mathematical rigor through the axiomatic method and is the earliest example of the format still used in mathematics today: definition, axiom and proof. Although most of the contents of the Elements were already known, Euclid arranged them into a single, coherent logical framework. The Elements was known to all educated people in the West until the middle of the 20th century, and its contents are still taught in geometry classes today. Archimedes of Syracuse used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave remarkably accurate approximations of pi.
He studied the spiral bearing his name and obtained formulas for the volumes of surfaces of revolution.
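Archimedes' quadrature of the parabola amounts to summing the geometric series 1 + 1/4 + 1/16 + ... = 4/3: each generation of inscribed triangles contributes a quarter of the area of the previous one. A brief numerical sketch:

```python
def exhaustion(terms):
    """Partial sum of Archimedes' series: sum of (1/4)^n for n = 0..terms-1."""
    return sum((1 / 4) ** n for n in range(terms))

# The partial sums converge rapidly to 4/3, the ratio of the area of a
# parabolic segment to that of its largest inscribed triangle.
assert abs(exhaustion(30) - 4 / 3) < 1e-15
```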
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Google's Android accounted for up to 70% of use in 2017. According to third-quarter 2016 data, Android on smartphones was dominant with 87.5 percent of the market and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a yearly decrease in market share of 5.2 percent, while other operating systems amounted to just 0.3 percent.
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently; this is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux, as well as non-Unix-like ones, such as AmigaOS, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking.
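Cooperative multitasking as described above can be sketched with Python generators, where each task voluntarily yields control back to a simple round-robin scheduler (all names here are illustrative, not any real OS API):

```python
def task(name, steps, trace):
    """A cooperative task: does one unit of work, then yields the CPU."""
    for i in range(steps):
        trace.append(f"{name}{i}")
        yield  # voluntarily hand control back to the scheduler

def run(tasks):
    """Round-robin scheduler: cycles through tasks until all have finished."""
    ready = list(tasks)
    while ready:
        for t in list(ready):
            try:
                next(t)          # let the task run until its next yield
            except StopIteration:
                ready.remove(t)  # task finished; drop it from the ready list

trace = []
run([task("A", 2, trace), task("B", 3, trace)])
# The tasks interleave: A0 B0 A1 B1 B2
```

A misbehaving task that never yields would stall every other task, which is exactly the weakness that led preemptive multitasking to supersede this model.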
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify the processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines like PDAs with less autonomy. They are able to operate with a limited number of resources, and they are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single address space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled the use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched paper cards.
Microsoft Windows is a group of several graphical operating system families, all of which are developed and sold by Microsoft. Each family caters to a certain sector of the computing industry. Active Windows families include Windows NT and Windows Embedded; defunct Windows families include Windows 9x, Windows Mobile and Windows Phone. Microsoft introduced an operating environment named Windows on November 20, 1985, as a graphical operating system shell for MS-DOS in response to the growing interest in graphical user interfaces. Microsoft Windows came to dominate the world's personal computer market with over 90% market share, overtaking Mac OS, which had been introduced in 1984. Apple came to see Windows as an unfair encroachment on their innovation in GUI development as implemented on products such as the Lisa and Macintosh. On PCs, Windows is still the most popular operating system. However, in 2014, Microsoft admitted losing the majority of the overall operating system market to Android because of the massive growth in sales of Android smartphones.
In 2014, the number of Windows devices sold was less than 25% of the number of Android devices sold. This comparison, however, may not be fully relevant, as the two operating systems traditionally target different platforms. Still, numbers for server use of Windows show about one third market share, similar to that for end-user use. As of October 2018, the most recent version of Windows for PCs, tablets and embedded devices is Windows 10; the most recent version for server computers is Windows Server 2019. A specialized version of Windows also runs on the Xbox One video game console. Microsoft, the developer of Windows, has registered several trademarks, each of which denotes a family of Windows operating systems that target a specific sector of the computing industry. As of 2014, the following Windows families are being developed: Windows NT: started as a family of operating systems with Windows NT 3.1, an operating system for server computers and workstations. It now consists of three operating system subfamilies that are released at the same time and share the same kernel: Windows: the operating system for mainstream personal computers and smartphones.
The latest version is Windows 10. The main competitors of this family are macOS by Apple for personal computers and Android for mobile devices. Windows Server: the operating system for server computers; the latest version is Windows Server 2019. Unlike its client sibling, it has adopted a strong naming scheme; the main competitor of this family is Linux. Windows PE: a lightweight version of its Windows sibling, meant to operate as a live operating system, used for installing Windows on bare-metal computers and for recovery or troubleshooting purposes; the latest version is Windows PE 10. Windows IoT: initially, Microsoft developed Windows CE as a general-purpose operating system for every device that was too resource-limited to be called a full-fledged computer. Windows CE was later renamed Windows Embedded Compact and folded under the Windows Embedded trademark, which consists of Windows Embedded Industry, Windows Embedded Professional, Windows Embedded Standard, Windows Embedded Handheld and Windows Embedded Automotive.
The following Windows families are no longer being developed: Windows 9x: an operating system that targeted the consumer market. It was discontinued because of suboptimal performance; Microsoft now caters to the consumer market with Windows NT. Windows Mobile: the predecessor to Windows Phone, it was a mobile phone operating system; the first version was called Pocket PC 2000, and the last version was Windows Mobile 6.5. Windows Phone: an operating system sold only to manufacturers of smartphones; the first version was Windows Phone 7, followed by Windows Phone 8, and the last version was Windows Phone 8.1. It was succeeded by Windows 10 Mobile. The term Windows collectively describes any or all of several generations of Microsoft operating system products. The history of Windows dates back to 1981, when Microsoft started work on a program called "Interface Manager". It was announced in November 1983 under the name "Windows", but Windows 1.0 was not released until November 1985.
Windows 1.0 achieved little popularity. It was not a complete operating system; rather, its shell was a program known as the MS-DOS Executive. Components included Calculator, Cardfile, Clipboard viewer, Control Panel, Paint, Reversi and Write. Windows 1.0 did not allow overlapping windows; instead, all windows were tiled, and only modal dialog boxes could appear over other windows. Microsoft sold Windows Development libraries with the C development environment, which included numerous windows samples. Windows 2.0 was released in December 1987 and was more popular than its predecessor. It featured several improvements to the user interface and memory management. Windows 2.03 changed the OS from tiled windows to overlapping windows; this change led Apple Computer to file a suit against Microsoft alleging infringement of Apple's copyrights.
Trigonometry is a branch of mathematics that studies relationships between side lengths and angles of triangles. The field emerged in the Hellenistic world during the 3rd century BC from applications of geometry to astronomical studies. In particular, 3rd-century astronomers first noted that the ratio of the lengths of two sides of a right-angled triangle depends only on one of the acute angles of the triangle; these dependencies are now called trigonometric functions. Trigonometry is the foundation of all applied geometry, including geodesy, celestial mechanics, solid mechanics and navigation. Trigonometric functions have been extended as functions of a real or complex variable, which are today pervasive in all of mathematics. Sumerian astronomers studied angle measure. They, and later the Babylonians, studied the ratios of the sides of similar triangles and discovered some properties of these ratios, but did not turn that into a systematic method for finding the sides and angles of triangles; the ancient Nubians used a similar method.
In the 3rd century BC, Hellenistic mathematicians such as Euclid and Archimedes studied the properties of chords and inscribed angles in circles, and they proved theorems that are equivalent to modern trigonometric formulae, although they presented them geometrically rather than algebraically. In 140 BC, Hipparchus gave the first tables of chords, analogous to modern tables of sine values, and used them to solve problems in trigonometry and spherical trigonometry. In the 2nd century AD, the Greco-Egyptian astronomer Ptolemy constructed detailed trigonometric tables in Book 1, chapter 11 of his Almagest. Ptolemy used chord length to define his trigonometric functions, a minor difference from the sine convention we use today. Centuries passed before more detailed tables were produced, and Ptolemy's treatise remained in use for performing trigonometric calculations in astronomy throughout the next 1200 years in the medieval Byzantine and Western European worlds. The modern sine convention is first attested in the Surya Siddhanta, and its properties were further documented by the 5th-century Indian mathematician and astronomer Aryabhata.
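The "minor difference" between Ptolemy's chord and the modern sine is simple to state: in a circle of radius R, the chord subtending an angle θ has length 2R·sin(θ/2). A small numerical sketch:

```python
import math

def chord(theta, R=1.0):
    """Length of the chord subtending angle theta (radians) in a circle of radius R."""
    return 2 * R * math.sin(theta / 2)

# In a unit-radius circle, a 60-degree chord equals the radius itself,
# one of the classical entries in a table of chords.
assert math.isclose(chord(math.radians(60)), 1.0)
```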
These Greek and Indian works were expanded by medieval Islamic mathematicians. By the 10th century, Islamic mathematicians were using all six trigonometric functions, had tabulated their values, and were applying them to problems in spherical geometry. The Persian polymath Nasir al-Din al-Tusi has been described as the creator of trigonometry as a mathematical discipline in its own right. He was the first to treat trigonometry as a mathematical discipline independent of astronomy, and he developed spherical trigonometry into its present form. He listed the six distinct cases of a right-angled triangle in spherical trigonometry, and in his On the Sector Figure he stated the law of sines for plane and spherical triangles, discovered the law of tangents for spherical triangles, and provided proofs for both of these laws. Knowledge of trigonometric functions and methods reached Western Europe via Latin translations of Ptolemy's Greek Almagest as well as the works of Persian and Arab astronomers such as Al Battani and Nasir al-Din al-Tusi.
One of the earliest works on trigonometry by a northern European mathematician is De Triangulis by the 15th-century German mathematician Regiomontanus, who was encouraged to write it, and provided with a copy of the Almagest, by the Byzantine Greek scholar cardinal Basilios Bessarion, with whom he lived for several years. At the same time, another translation of the Almagest from Greek into Latin was completed by the Cretan George of Trebizond. Trigonometry was still so little known in 16th-century northern Europe that Nicolaus Copernicus devoted two chapters of De revolutionibus orbium coelestium to explaining its basic concepts. Driven by the demands of navigation and the growing need for accurate maps of large geographic areas, trigonometry grew into a major branch of mathematics. Bartholomaeus Pitiscus was the first to use the word, publishing his Trigonometria in 1595. Gemma Frisius described for the first time the method of triangulation still used today in surveying, and it was Leonhard Euler who incorporated complex numbers into trigonometry.
The works of the Scottish mathematicians James Gregory in the 17th century and Colin Maclaurin in the 18th century were influential in the development of trigonometric series. In the 18th century, Brook Taylor defined the general Taylor series. If one angle of a triangle is 90 degrees and one of the other angles is known, the third is thereby fixed, because the three angles of any triangle add up to 180 degrees; the two acute angles therefore add up to 90 degrees and are complementary angles. The shape of a triangle is determined, up to similarity, by its angles. Once the angles are known, the ratios of the sides are determined, regardless of the overall size of the triangle. If the length of one of the sides is also known, the other two are determined. These ratios are given by the following trigonometric functions of the known angle A, where a, b and c refer to the lengths of the sides in the accompanying figure: the sine function, defined as the ratio of the side opposite the angle to the hypotenuse.
sin A = opposite/hypotenuse = a/c. The cosine function, defined as the ratio of the side adjacent to the angle to the hypotenuse: cos A = adjacent/hypotenuse = b/c.
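These ratio definitions can be checked numerically against a 3-4-5 right triangle; a minimal sketch:

```python
import math

# Right triangle with legs a = 3 and b = 4; the hypotenuse c follows
# from the Pythagorean theorem.
a, b = 3.0, 4.0
c = math.hypot(a, b)      # 5.0

A = math.atan2(a, b)      # the acute angle opposite side a

# The ratio definitions agree with the standard library functions:
assert math.isclose(math.sin(A), a / c)   # sin A = opposite / hypotenuse
assert math.isclose(math.cos(A), b / c)   # cos A = adjacent / hypotenuse

# The two acute angles are complementary (they sum to 90 degrees):
B = math.atan2(b, a)
assert math.isclose(math.degrees(A) + math.degrees(B), 90.0)
```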
The megabyte is a multiple of the unit byte for digital information. Its recommended unit symbol is MB. The unit prefix mega is a multiplier of 1,000,000 in the International System of Units (SI); therefore, one megabyte is one million bytes of information. This definition has been incorporated into the International System of Quantities. However, in the computer and information technology fields, several other definitions are used that arose for historical reasons of convenience. A common usage has been to designate one megabyte as 1,048,576 bytes (1024² bytes), a measurement that conveniently expresses the binary multiples inherent in digital computer memory architectures. However, most standards bodies have deprecated this usage in favor of a set of binary prefixes, in which this quantity is designated by the unit mebibyte (MiB). Less common is a convention that used the megabyte to mean 1000×1024 bytes. The megabyte is thus used to denote either 1000² bytes or 1024² bytes. The interpretation using base 1024 originated as technical jargon for byte multiples that needed to be expressed by powers of 2 but lacked a convenient name.
As 1024 approximates 1000, which corresponds to the SI prefix kilo-, it was a convenient term to denote the binary multiple. In 1998 the International Electrotechnical Commission (IEC) proposed standards for binary prefixes requiring the use of megabyte to denote 1000² bytes and mebibyte to denote 1024² bytes. By the end of 2009, the IEC standard had been adopted by the IEEE, EU, ISO and NIST. The term megabyte nevertheless continues to be used with different meanings: Base 10: 1 MB = 1,000,000 bytes is the definition recommended by the International System of Units and the IEC. This definition is used in networking contexts and for most storage media (hard drives, flash-based storage, DVDs), and is consistent with the other uses of the SI prefix in computing, such as CPU clock speeds or measures of performance. The Mac OS X 10.6 file manager is a notable example of this usage in software; since Snow Leopard, file sizes are reported in decimal units. In this convention, one thousand megabytes is equal to one gigabyte, where 1 GB is one billion bytes.
Base 2: 1 MB = 1,048,576 bytes is the definition used by Microsoft Windows in reference to computer memory, such as RAM. This definition is synonymous with the unambiguous binary prefix mebibyte. In this convention, one thousand and twenty-four megabytes is equal to one gigabyte, where 1 GB is 1024³ bytes. Mixed: 1 MB = 1,024,000 bytes is the definition used to describe the formatted capacity of the "1.44 MB" 3.5-inch HD floppy disk, which actually has a capacity of 1,474,560 bytes. Semiconductor memory doubles in size for each address lane added to an integrated circuit package, which favors counts that are powers of two. The capacity of a disk drive, by contrast, is the product of the sector size, the number of sectors per track, the number of tracks per side, and the number of disk platters in the drive; changes in any of these factors would not necessarily double the size. Sector sizes were set as powers of two for convenience in processing, and it was a natural extension to give the capacity of a disk drive in multiples of the sector size, giving a mix of decimal and binary multiples when expressing total disk capacity.
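The three conventions, and the disk arithmetic behind the mixed one, are easy to verify. A short sketch assuming the standard HD floppy geometry (512-byte sectors, 18 sectors per track, 80 tracks per side, 2 sides):

```python
# The three "megabyte" conventions described above:
MB_SI = 1000 ** 2        # 1,000,000 bytes (SI / IEC recommendation)
MB_BINARY = 1024 ** 2    # 1,048,576 bytes (the mebibyte, MiB)
MB_MIXED = 1000 * 1024   # 1,024,000 bytes (floppy-disk convention)

assert MB_BINARY == 1048576

# Formatted capacity of the 3.5-inch HD floppy, as a product of its geometry:
sector_size = 512
sectors_per_track = 18
tracks_per_side = 80
sides = 2
capacity = sector_size * sectors_per_track * tracks_per_side * sides

assert capacity == 1474560          # the figure quoted in the text
assert capacity / MB_MIXED == 1.44  # hence the "1.44 MB" label
```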
Depending on compression methods and file format, a megabyte of data can be: a 1-megapixel bitmap image with 256 colors stored without any compression; a 4-megapixel JPEG image with normal compression; 1 minute of 128 kbit/s MP3-compressed music; 6 seconds of uncompressed CD audio; or a typical English book volume in plain text format. The human genome consists of DNA representing about 800 MB of data; the parts that differentiate one person from another can be compressed to about 4 MB.
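Two of these examples can be checked with simple arithmetic (using the SI megabyte, and assuming standard CD audio parameters of 44.1 kHz, 16-bit samples, two channels):

```python
MB = 1000 ** 2  # SI megabyte

# One minute of 128 kbit/s MP3: 128,000 bits/s over 60 s, 8 bits per byte.
mp3_bytes = 128_000 // 8 * 60
assert mp3_bytes == 960_000           # about 0.96 MB

# Six seconds of uncompressed CD audio:
# 44,100 samples/s * 2 bytes per sample * 2 channels * 6 s.
cd_bytes = 44_100 * 2 * 2 * 6
assert cd_bytes == 1_058_400          # about 1.06 MB
```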
MacOS is a series of graphical operating systems developed and marketed by Apple Inc. since 2001. It is the primary operating system for Apple's Mac family of computers. Within the market of desktop and home computers, by web usage, it is the second most used desktop OS, after Microsoft Windows. macOS is the second major series of Macintosh operating systems. The first is colloquially called the "classic" Mac OS, introduced in 1984, the final release of which was Mac OS 9 in 1999. The first desktop version, Mac OS X 10.0, was released in March 2001, with its first update, 10.1, arriving later that year. After this, Apple began naming its releases after big cats, which lasted until OS X 10.8 Mountain Lion. Since OS X 10.9 Mavericks, releases have been named after locations in California. Apple shortened the name to "OS X" in 2012 and changed it to "macOS" in 2016, adopting the nomenclature it was using for its other operating systems, iOS, watchOS and tvOS. The latest version is macOS Mojave, publicly released in September 2018.
Between 1999 and 2009, Apple also sold a separate series of operating systems called Mac OS X Server. The initial version, Mac OS X Server 1.0, was released in 1999 with a user interface similar to Mac OS 8.5. After this, new versions were introduced concurrently with the desktop version of Mac OS X. Beginning with Mac OS X 10.7 Lion, the server functions were made available as a separate package on the Mac App Store. macOS is based on technologies developed between 1985 and 1997 at NeXT, a company that Apple co-founder Steve Jobs created after leaving the company. The X was a prominent part of the operating system's brand identity and marketing in its early years, but has receded in prominence since the release of Snow Leopard in 2009. UNIX 03 certification was achieved for the Intel version of Mac OS X 10.5 Leopard, and all releases from Mac OS X 10.6 Snow Leopard up to the current version have UNIX 03 certification. MacOS shares its Unix-based core, named Darwin, and many of its frameworks with iOS, tvOS and watchOS.
A modified version of Mac OS X 10.4 Tiger was used for the first-generation Apple TV. Releases of Mac OS X from 1999 to 2005 ran on the PowerPC-based Macs of that period. After Apple announced that it was switching to Intel CPUs from 2006 onwards, versions were released for 32-bit and 64-bit Intel-based Macs. Versions from Mac OS X 10.7 Lion onwards run only on 64-bit Intel CPUs (in contrast to the ARM architecture used on iOS and watchOS devices) and do not support PowerPC applications. The heritage of what would become macOS originated at NeXT, a company founded by Steve Jobs following his departure from Apple in 1985. There, the Unix-like NeXTSTEP operating system was developed and then launched in 1989. The kernel of NeXTSTEP is based upon the Mach kernel, developed at Carnegie Mellon University, with additional kernel layers and low-level user space code derived from parts of BSD. Its graphical user interface was built on top of an object-oriented GUI toolkit using the Objective-C programming language. Throughout the early 1990s, Apple had tried to create a "next-generation" OS to succeed its classic Mac OS through the Taligent and Gershwin projects, but all of them were abandoned.
This led Apple to purchase NeXT in 1996, allowing NeXTSTEP, then called OPENSTEP, to serve as the basis for Apple's next-generation operating system. The purchase also led to Steve Jobs returning to Apple as interim, and then permanent, CEO, shepherding the transformation of the programmer-friendly OPENSTEP into a system that would be adopted by Apple's primary market of home users and creative professionals. The project was first code-named "Rhapsody" and later officially named Mac OS X. Mac OS X was presented as the tenth major version of Apple's operating system for Macintosh computers; previous Macintosh operating systems were named using Arabic numerals, as with Mac OS 8 and Mac OS 9. The letter "X" in Mac OS X's name refers to a Roman numeral and is therefore pronounced "ten" in this context. However, it is also commonly pronounced like the letter "X". The first version of Mac OS X, Mac OS X Server 1.0, was a transitional product, featuring an interface resembling the classic Mac OS, though it was not compatible with software designed for the older system.
Consumer releases of Mac OS X included more backward compatibility: Mac OS applications could be rewritten to run natively via the Carbon API. The consumer version of Mac OS X was launched in 2001 with Mac OS X 10.0. Reviews were variable, with extensive praise for its sophisticated, glossy Aqua interface but criticism of its sluggish performance. With Apple's popularity at a low, the makers of several classic Mac applications such as FrameMaker and PageMaker declined to develop new versions of their software for Mac OS X. Ars Technica columnist John Siracusa, who reviewed every major OS X release up to 10.10, described the early releases in retrospect as "dog-slow, feature poor" and Aqua as "unbearably slow and a huge resource hog". Apple went on to develop several new releases of Mac OS X; Siracusa's review of version 10.3 noted, "It's strange to have gone from years of uncertainty and vaporware to a steady annual supply of major new operating system releases." Version 10.4 Tiger shocked executives at Microsoft by offering a number of features, such as fast file searching.
Interoperability is a characteristic of a product or system whose interfaces are completely understood, allowing it to work with other products or systems, at present or in the future, in either implementation or access, without any restrictions. While the term was originally defined for information technology or systems engineering services to allow for information exchange, a broader definition takes into account social and organizational factors that impact system-to-system performance. Interoperability also describes the task of building coherent services for users when the individual components are technically different and managed by different organizations. If two or more systems are capable of communicating with each other, they exhibit syntactic interoperability when using specified data formats and communication protocols; XML and SQL standards are among the tools of syntactic interoperability. This is also true for lower-level data formats, such as ensuring that alphabetical characters are stored in the same variation of ASCII or a Unicode format in all the communicating systems.
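A minimal illustration of syntactic interoperability, using JSON rather than the XML mentioned above for brevity (the two systems and their field names are hypothetical): as long as both sides agree on the data format and field layout, a record survives the round trip intact.

```python
import json

# Hypothetical "system A" exports a record in an agreed-upon JSON layout...
def system_a_export(record):
    part_id, name = record
    return json.dumps({"id": part_id, "name": name})

# ...and hypothetical "system B", written independently, can consume it
# because it agrees on the same syntax and field names.
def system_b_import(payload):
    data = json.loads(payload)
    return (data["id"], data["name"])

assert system_b_import(system_a_export((42, "widget"))) == (42, "widget")
```

Note that this shared syntax says nothing about shared meaning; agreeing on what the "id" field denotes is the separate problem of semantic interoperability discussed next.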
Beyond the ability of two or more computer systems to exchange information, semantic interoperability is the ability to automatically interpret the information exchanged meaningfully and accurately in order to produce useful results as defined by the end users of both systems. To achieve semantic interoperability, both sides must refer to a common information exchange reference model; the content of the information exchange requests is then unambiguously defined: what is sent is the same as what is understood. The possibility of promoting this result by user-driven convergence of disparate interpretations of the same information has been the object of study by research prototypes such as S3DB. Cross-domain interoperability involves multiple social, political, and legal entities working together for a common interest and/or information exchange. Interoperability implies open standards ab initio, i.e. by definition, and it implies exchanges between a range of products, or similar products from several different vendors, or between past and future revisions of the same product.
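A common reference model can be sketched as follows. The shared field names and the per-system mapping tables here are illustrative assumptions, not drawn from any real standard; the idea is that each system translates its local vocabulary into the shared model, so that what is sent is the same as what is understood:

```python
# Each system publishes a mapping from its local field names to a
# shared reference model (here, just two agreed field names).
SYSTEM_A_MAP = {"cust_name": "customer", "qty": "quantity"}
SYSTEM_B_MAP = {"buyer": "customer", "amount": "quantity"}

def to_common(record: dict, mapping: dict) -> dict:
    # Translate a system-local record into the common reference model.
    return {mapping[k]: v for k, v in record.items() if k in mapping}

a_record = {"cust_name": "Acme", "qty": 5}   # System A's local vocabulary
b_record = {"buyer": "Acme", "amount": 5}    # System B's local vocabulary

# Despite different local vocabularies, both records resolve to the
# same unambiguous content under the shared model.
print(to_common(a_record, SYSTEM_A_MAP) == to_common(b_record, SYSTEM_B_MAP))  # True
```

Without the shared model, System B could parse System A's message syntactically yet still misinterpret what each field means, which is why semantic interoperability is a stronger requirement than syntactic interoperability.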
Interoperability may also be developed post facto, as a special measure between two products while excluding the rest, by using open standards. When a vendor is forced to adapt its system to a dominant system that is not based on open standards, the result is not interoperability but only compatibility. Open standards rely on a broadly consultative and inclusive group, including representatives from vendors and others holding a stake in the development, that discusses and debates the technical and economic merits and feasibility of a proposed common protocol. After the doubts and reservations of all members are addressed, the resulting common document is endorsed as a common standard; this document is subsequently released to the public and henceforth becomes an open standard. It is published and available free or at a nominal cost to any and all comers, with no further encumbrances. Various vendors and individuals can use the standards document to make products that implement the common protocol defined in the standard and are thus interoperable by design, with no specific liability or advantage for any customer in choosing one product over another on the basis of standardised features.
The vendors' products then compete on the quality of their implementation, user interface, ease of use, price, and a host of other factors, while keeping the customer's data intact and transferable should the customer choose to switch to a competing product for business reasons. Post facto interoperability may be the result of the absolute market dominance of a particular product in contravention of any applicable standards, or because no effective standards were present at the time of that product's introduction. The vendor behind such a product can choose to ignore any forthcoming standards and not co-operate in any standardisation process at all, using its near-monopoly to insist that its product sets the de facto standard by its market dominance. This is not a problem if the product's implementation is open and minimally encumbered, but it may as well be both closed and encumbered. Because of the network effect, achieving interoperability with such a product is both critical for any other vendor that wishes to remain relevant in the market and difficult to accomplish, owing to the lack of co-operation on equal terms with the original vendor, who may well see the new vendor as a potential competitor and threat.
In the absence of technical data, newer implementations rely on clean-room reverse engineering to achieve interoperability. The original vendor can provide such technical data to others in the name of 'encouraging competition,' but such data is invariably encumbered and may be of limited use. Availability of such data is not equivalent to an open standard, because the data is provided by the original vendor on a discretionary basis, and that vendor has every interest in blocking the effective implementation of competing solutions. It may subtly alter or change its product in newer revisions so that competitors' implementations are almost, but not quite, interoperable, leading customers to consider them unreliable or of a lower quality; these changes can either not be passed on to other vendors at all, or passed on only after a strategic delay, maintaining the market dominance of the original vendor. The data itself may also be encumbered, e.g. by patents or pricing, leading to a dependence of all competing solutions on the original vendor and directing a revenue stream from the competitors' customers back to the original vendor.
This revenue stream is only a result of the origina