Lossy compression
In information technology, lossy compression or irreversible compression is the class of data encoding methods that uses inexact approximations and partial data discarding to represent the content. These techniques are used to reduce data size for storing and transmitting content; successively compressed versions of the same photograph, for example, show how higher degrees of approximation produce coarser images as more detail is removed. This is in contrast to lossless data compression; the amount of data reduction possible using lossy compression is much higher than with lossless techniques. Well-designed lossy compression technology reduces file sizes substantially before degradation is noticed by the end-user; even when noticeable, further data reduction may be desirable. Lossy compression is most commonly used to compress multimedia data in applications such as streaming media and internet telephony. By contrast, lossless compression is typically required for text and data files, such as bank records and text articles. It is often advantageous to make a master lossless file from which additional copies can be produced.
This allows one to avoid basing new compressed copies on a lossy source file, which would introduce additional artifacts and further unnecessary information loss. It is possible to compress many types of digital data in a way that reduces the size of the computer file needed to store it, or the bandwidth needed to transmit it, with no loss of the information contained in the original file. A picture, for example, is converted to a digital file by considering it to be an array of dots and specifying the color and brightness of each dot. If the picture contains an area of the same color, it can be compressed without loss by saying "200 red dots" instead of "red dot, red dot, ... red dot." The original data contains a certain amount of information, and there is a lower limit to the size of file that can carry all of that information. Basic information theory says that when data is compressed, its entropy increases, and it cannot increase indefinitely. As an intuitive example, most people know that a compressed ZIP file is smaller than the original file, but repeatedly compressing the same file will not reduce the size to nothing.
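As a toy sketch of the run-length idea just described (illustrative only; real compressors pair such modeling with entropy coding), the following program replaces a run of identical symbols with a count-plus-symbol pair:

```c
#include <stdio.h>
#include <string.h>

/* Minimal run-length encoder: prints "<count><symbol>" pairs,
   e.g. "rrrrbb" -> "4r 2b".  Purely illustrative; real codecs
   use far more sophisticated modeling and entropy coding. */
static void rle_encode(const char *data)
{
    size_t n = strlen(data);
    for (size_t i = 0; i < n; ) {
        size_t run = 1;
        while (i + run < n && data[i + run] == data[i])
            run++;
        printf("%zu%c ", run, data[i]);
        i += run;
    }
    putchar('\n');
}

int main(void)
{
    /* 200 identical "red" dots compress to a single count-symbol pair. */
    char dots[201];
    memset(dots, 'r', 200);
    dots[200] = '\0';
    rle_encode(dots);   /* prints: 200r */
    return 0;
}
```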
Most compression algorithms can recognize when further compression would be pointless and would in fact increase the size of the data. In many cases, files or data streams contain more information than is needed for a particular purpose. For example, a picture may have more detail than the eye can distinguish when reproduced at the largest size intended. Developing lossy compression techniques matched as closely as possible to human perception is a complex task. Sometimes the ideal is a file that provides the same perception as the original, with as much digital information as possible removed; the terms 'irreversible' and 'reversible' are preferred over 'lossy' and 'lossless' for some applications, such as medical image compression, to circumvent the negative implications of 'loss'. The type and amount of loss can affect the utility of the images. Artifacts or undesirable effects of compression may be discernible yet the result still useful for the intended purpose. Or lossy compressed images may be 'visually lossless', or, in the case of medical images, so-called Diagnostically Acceptable Irreversible Compression may have been applied.
More generally, some forms of lossy compression can be thought of as an application of transform coding – in the case of multimedia data, perceptual coding: it transforms the raw data to a domain that more accurately reflects the information content. For example, rather than expressing a sound file as the amplitude levels over time, one may express it as the frequency spectrum over time, which corresponds more closely to human audio perception. While data reduction is a main goal of transform coding, it also allows other goals: one may represent data more accurately for the original amount of space – for example, in principle, if one starts with an analog or high-resolution digital master, an MP3 file of a given size should provide a better representation than raw uncompressed audio in a WAV or AIFF file of the same size. This is because uncompressed audio can only reduce file size by lowering bit rate or depth, whereas compressing audio can reduce size while maintaining bit rate and depth. The compression becomes a selective loss of the least significant data, rather than losing data across the board.
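As a minimal illustration of such a change of domain, the sketch below computes a naive discrete Fourier transform of a short block of samples; real perceptual coders use fast transforms (for example, the MDCT) together with psychoacoustic models, so this only shows the time-to-frequency re-expression itself:

```c
#include <math.h>
#include <stdio.h>

#define N 8   /* tiny block size, for illustration only */

/* Naive O(N^2) discrete Fourier transform magnitudes: re-expresses N
   time-domain samples as N frequency-domain magnitudes, the change of
   domain that transform coders exploit before discarding perceptually
   insignificant components. */
int main(void)
{
    const double PI = 3.14159265358979323846;
    double x[N] = {0, 1, 0, -1, 0, 1, 0, -1};   /* a coarse sine wave */

    for (int k = 0; k < N; k++) {
        double re = 0.0, im = 0.0;
        for (int n = 0; n < N; n++) {
            double a = 2.0 * PI * k * n / N;
            re += x[n] * cos(a);
            im -= x[n] * sin(a);
        }
        printf("bin %d: magnitude %.2f\n", k, sqrt(re * re + im * im));
    }
    return 0;
}
```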
Further, transform coding may provide a better domain for manipulating or otherwise editing the data – for example, equalization of audio is most naturally expressed in the frequency domain rather than in the raw time domain. From this point of view, perceptual encoding is not about discarding data, but rather about a better representation of data. Another use is for backward compatibility and graceful degradation: in color television, encoding color via a luminance-chrominance transform domain means that black-and-white sets display the luminance while ignoring the color information. Another example is chroma subsampling: the use of color spaces such as YIQ, used in NTSC, allows one to reduce the resolution of the color components to accord with human perception – humans have the highest resolution for black-and-white detail (luminance) and lower resolution for color (chrominance), so the chrominance components can be stored at reduced resolution.
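A rough sketch of the luminance-chrominance idea, assuming the classic NTSC/BT.601 luma weights; the pixel values and the single averaged colour-difference term are illustrative simplifications, not a full YIQ conversion:

```c
#include <stdio.h>

/* Illustrative sketch of luminance extraction and 2:1 chroma subsampling.
   The luma weights are the classic NTSC/BT.601 coefficients; a real encoder
   would work on whole images with a proper YIQ or YCbCr conversion. */
typedef struct { double r, g, b; } Pixel;

static double luma(Pixel p)      /* what a black-and-white set would display */
{
    return 0.299 * p.r + 0.587 * p.g + 0.114 * p.b;
}

int main(void)
{
    Pixel row[4] = {{1, 0, 0}, {0.9, 0.1, 0}, {0, 1, 0}, {0, 0.9, 0.1}};

    for (int i = 0; i < 4; i += 2) {
        /* Keep luma at full resolution ... */
        printf("Y[%d]=%.3f Y[%d]=%.3f  ", i, luma(row[i]), i + 1, luma(row[i + 1]));
        /* ... but store only one colour-difference value per pair of pixels. */
        double cr = ((row[i].r - luma(row[i])) +
                     (row[i + 1].r - luma(row[i + 1]))) / 2.0;
        printf("shared R-Y = %.3f\n", cr);
    }
    return 0;
}
```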
Microsoft
Microsoft Corporation is an American multinational technology company with headquarters in Redmond, Washington. It develops, licenses and sells computer software, consumer electronics, personal computers and related services. Its best-known software products are the Microsoft Windows line of operating systems, the Microsoft Office suite, and the Internet Explorer and Edge web browsers. Its flagship hardware products are the Xbox video game consoles and the Microsoft Surface lineup of touchscreen personal computers. As of 2016, it is the world's largest software maker by revenue and one of the world's most valuable companies. The word "Microsoft" is a portmanteau of "microcomputer" and "software". Microsoft is ranked No. 30 in the 2018 Fortune 500 rankings of the largest United States corporations by total revenue. Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800; it rose to dominate the personal computer operating system market with MS-DOS in the mid-1980s, followed by Microsoft Windows.
The company's 1986 initial public offering, and the subsequent rise in its share price, created three billionaires and an estimated 12,000 millionaires among Microsoft employees. Since the 1990s, it has diversified beyond the operating system market and has made a number of corporate acquisitions, the largest being the acquisition of LinkedIn for $26.2 billion in December 2016, followed by the acquisition of Skype Technologies for $8.5 billion in May 2011. As of 2015, Microsoft is market-dominant in the IBM PC-compatible operating system market and the office software suite market, although it has lost the majority of the overall operating system market to Android. The company also produces a wide range of other consumer and enterprise software for desktops and servers, including Internet search, the digital services market, mixed reality, cloud computing and software development. Steve Ballmer replaced Gates as CEO in 2000 and later envisioned a "devices and services" strategy; this began with the acquisition of Danger Inc. in 2008, and Microsoft entered the personal computer production market for the first time in June 2012 with the launch of the Microsoft Surface line of tablet computers.
Since Satya Nadella took over as CEO in 2014, the company has scaled back on hardware and has instead focused on cloud computing, a move that helped the company's shares reach their highest value since December 1999. In 2018, Microsoft surpassed Apple as the most valuable publicly traded company in the world, having been dethroned by Apple in 2010. Childhood friends Bill Gates and Paul Allen sought to make a business using their shared skills in computer programming. In 1972 they founded their first company, Traf-O-Data, which sold a rudimentary computer to track and analyze automobile traffic data. While Gates enrolled at Harvard, Allen pursued a degree in computer science at Washington State University, though he later dropped out to work at Honeywell. The January 1975 issue of Popular Electronics featured Micro Instrumentation and Telemetry Systems's Altair 8800 microcomputer, which inspired Allen to suggest that they could program a BASIC interpreter for the device. After a call from Gates claiming to have a working interpreter, MITS requested a demonstration.
Since they did not yet have one, Allen worked on a simulator for the Altair while Gates developed the interpreter. Although they developed the interpreter on a simulator and not the actual device, it worked flawlessly when they demonstrated it to MITS in Albuquerque, New Mexico. MITS agreed to distribute it, marketing it as Altair BASIC. Gates and Allen established Microsoft on April 4, 1975, with Gates as CEO; the original name of "Micro-Soft" was suggested by Allen. In August 1977 the company formed an agreement with ASCII Magazine in Japan, resulting in its first international office, "ASCII Microsoft". Microsoft moved to a new home in Bellevue, Washington in January 1979. Microsoft entered the operating system business in 1980 with its own version of Unix, called Xenix. However, it was MS-DOS that solidified the company's dominance. After negotiations with Digital Research failed, IBM awarded a contract to Microsoft in November 1980 to provide a version of the CP/M OS, which was set to be used in the upcoming IBM Personal Computer.
For this deal, Microsoft purchased a CP/M clone called 86-DOS from Seattle Computer Products, which it branded as MS-DOS, though IBM rebranded it to PC DOS. Following the release of the IBM PC in August 1981, Microsoft retained ownership of MS-DOS. Since IBM had copyrighted the IBM PC BIOS, other companies had to reverse engineer it in order for non-IBM hardware to run as IBM PC compatibles, but no such restriction applied to the operating system. Due to various factors, such as MS-DOS's available software selection, Microsoft became the leading PC operating system vendor. The company expanded into new markets with the release of the Microsoft Mouse in 1983, as well as with a publishing division named Microsoft Press. Paul Allen resigned from Microsoft in 1983 after developing Hodgkin's disease. Allen later claimed that Gates had wanted to dilute his share in the company following the diagnosis because Gates did not think Allen was working hard enough. After leaving Microsoft, Allen lost billions of dollars on ill-conceived or mistimed technology investments.
He also invested in low-tech sectors, sports teams and commercial real estate. Despite having begun jointly developing a new operating system, OS/2, with IBM in 1985, Microsoft released Microsoft Windows, a graphical extension for MS-DOS, in November 1985.
Analog-to-digital converter
In electronics, an analog-to-digital converter (ADC) is a system that converts an analog signal, such as a sound picked up by a microphone or light entering a digital camera, into a digital signal. An ADC may also provide an isolated measurement, such as an electronic device that converts an input analog voltage or current to a digital number representing the magnitude of the voltage or current. Typically the digital output is a two's complement binary number proportional to the input, but there are other possibilities. There are several ADC architectures. Due to the complexity and the need for precisely matched components, all but the most specialized ADCs are implemented as integrated circuits. A digital-to-analog converter (DAC) performs the reverse function. An ADC converts a continuous-time and continuous-amplitude analog signal to a discrete-time and discrete-amplitude digital signal; the conversion involves quantization of the input, so it introduces a small amount of error or noise. Furthermore, instead of continuously performing the conversion, an ADC does the conversion periodically, sampling the input, which limits the allowable bandwidth of the input signal.
The performance of an ADC is characterized by its bandwidth and signal-to-noise ratio (SNR). The bandwidth of an ADC is characterized by its sampling rate. The SNR of an ADC is influenced by many factors, including the resolution and accuracy, aliasing and jitter. The SNR of an ADC is often summarized in terms of its effective number of bits (ENOB), the number of bits of each measure it returns that are on average not noise. An ideal ADC has an ENOB equal to its resolution. An ADC is chosen to match the bandwidth and required SNR of the signal to be digitized. If an ADC operates at a sampling rate greater than twice the bandwidth of the signal, then per the Nyquist–Shannon sampling theorem, perfect reconstruction is possible. The presence of quantization error limits the SNR of even an ideal ADC. However, if the SNR of the ADC exceeds that of the input signal, its effects may be neglected, resulting in an essentially perfect digital representation of the analog input signal. The resolution of the converter indicates the number of discrete values it can produce over the range of analog values.
The resolution determines the magnitude of the quantization error and therefore determines the maximum possible average signal-to-noise ratio for an ideal ADC without the use of oversampling. The values are usually stored electronically in binary form, so the resolution is usually expressed as the audio bit depth. In consequence, the number of discrete values available is usually a power of two. For example, an ADC with a resolution of 8 bits can encode an analog input to one of 256 different levels; the values can represent different ranges (for example, 0 to 255 or −128 to 127) depending on the application. Resolution can also be defined electrically and expressed in volts; the change in voltage required to guarantee a change in the output code level is called the least significant bit (LSB) voltage. The resolution Q of the ADC is equal to the LSB voltage. The voltage resolution of an ADC is equal to its overall voltage measurement range divided by the number of intervals: Q = E_FSR / 2^M, where M is the ADC's resolution in bits and E_FSR is the full-scale voltage range.
E_FSR is given by E_FSR = V_RefHi − V_RefLow, where V_RefHi and V_RefLow are the upper and lower extremes of the voltages that can be coded. The number of voltage intervals is given by N = 2^M, where M is the ADC's resolution in bits; that is, one voltage interval is assigned between two consecutive code levels. Example: with a full-scale measurement range of 0 to 1 volt and an ADC resolution of 3 bits (2^3 = 8 quantization levels), the ADC voltage resolution is Q = 1 V / 8 = 0.125 V. In many cases, the useful resolution of a converter is limited by the signal-to-noise ratio and other errors in the overall system, expressed as an ENOB. Quantization error is introduced by quantization in an ideal ADC; it is a rounding error between the analog input voltage and the output digitized value. The error is nonlinear and signal-dependent. In an ideal ADC, where the quantization error is uniformly distributed between −1/2 LSB and +1/2 LSB and the signal has a uniform distribution covering all quantization levels, the signal-to-quantization-noise ratio is given by SQNR = 20 log10(2^Q) ≈ 6.02 · Q dB, where Q is the number of quantization bits.
For example, for a 16-bit ADC, the quantization error is 96.3 dB below the maximum level. Quantization error is distributed from DC to the Nyquist frequency; consequently, if part of the ADC's bandwidth is not used, as is the case with oversampling, some of the quantization error falls out of band, effectively improving the SQNR over the bandwidth actually in use.
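A rough numerical sketch of the formulas above, assuming an ideal converter: it reuses the 0 to 1 V, 3-bit example and then checks the roughly 6.02 dB-per-bit rule for a few word lengths:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Resolution example from the text: 0-1 V full scale, M = 3 bits. */
    const double efsr = 1.0;
    const int    M    = 3;
    const int    N    = 1 << M;        /* 2^M = 8 quantization levels */
    const double Q    = efsr / N;      /* LSB voltage = 0.125 V       */

    double v = 0.42;                           /* arbitrary test input     */
    int code = (int)floor(v / Q);              /* ideal transfer function  */
    if (code > N - 1) code = N - 1;            /* clamp at full scale      */
    printf("%.2f V -> code %d (of %d), Q = %.3f V\n", v, code, N, Q);

    /* SQNR rule of thumb for an ideal ADC: 20*log10(2^bits), ~6.02 dB/bit. */
    for (int bits = 8; bits <= 24; bits += 8)
        printf("%2d bits -> SQNR = %.1f dB\n",
               bits, 20.0 * log10(pow(2.0, bits)));   /* 16 bits -> ~96.3 dB */
    return 0;
}
```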
Entropy (information theory)
Information entropy is the average rate at which information is produced by a stochastic source of data. The measure of information entropy associated with each possible data value is the negative logarithm of the probability mass function for the value: S = −∑_i P_i log P_i. When the data source produces a low-probability value, the event carries more "information" than when the source produces a high-probability value. The amount of information conveyed by each event defined in this way becomes a random variable whose expected value is the information entropy. Entropy generally refers to disorder or uncertainty, and the definition of entropy used in information theory is directly analogous to the definition used in statistical thermodynamics. The concept of information entropy was introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication". The basic model of a data communication system is composed of three elements: a source of data, a communication channel and a receiver. As expressed by Shannon, the "fundamental problem of communication" is for the receiver to be able to identify what data was generated by the source, based on the signal it receives through the channel.
The entropy provides an absolute limit on the shortest possible average length of a lossless compression encoding of the data produced by a source; if the entropy of the source is less than the channel capacity of the communication channel, the data generated by the source can be reliably communicated to the receiver. Information entropy is typically measured in bits, or sometimes in "natural units" (nats) or decimal digits; the unit of measurement depends on the base of the logarithm used to define the entropy. The logarithm of the probability distribution is useful as a measure of entropy because it is additive for independent sources. For instance, the entropy of a fair coin toss is 1 bit, and the entropy of m tosses is m bits. In a straightforward representation, log2(n) bits are needed to represent a variable that can take one of n values, if n is a power of 2. If these values are equally probable, the entropy (in bits) is equal to this number. If one of the values is more probable to occur than the others, an observation that this value occurs is less informative than if some less common outcome had occurred.
Conversely, rarer events provide more information when observed. Since observation of less probable events occurs more rarely, the net effect is that the entropy received from non-uniformly distributed data is always less than or equal to log2(n). Entropy is zero when one outcome is certain to occur. The entropy quantifies these considerations when a probability distribution of the source data is known. The meaning of the events observed does not affect the entropy: entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not about the meaning of the events themselves. The basic idea of information theory is that the more one knows about a topic, the less new information one is apt to get about it. If an event is very probable, it is no surprise when it happens and provides little new information. Inversely, if the event was improbable, it is much more informative. The information content is an increasing function of the reciprocal of the probability of the event. If more events may happen, entropy measures the average information content one can expect to get if one of the events happens.
This implies that casting a die has more entropy than tossing a coin, because each outcome of the die has a smaller probability than each outcome of the coin. Entropy is a measure of the unpredictability of the state, or equivalently, of its average information content. To get an intuitive understanding of these terms, consider the example of a political poll; such polls happen because the outcome of the poll is not already known. In other words, the outcome of the poll is unpredictable, and performing the poll and learning the results gives some new information. Now, consider the case that the same poll is performed a second time shortly after the first poll. Since the result of the first poll is already known, the outcome of the second poll can be predicted well and the results should not contain much new information. Next, consider the example of a coin toss. Assuming the probability of heads is the same as the probability of tails, the entropy of the coin toss is as high as it could be. There is no way to predict the outcome of the coin toss ahead of time: if one has to choose, the best one can do is predict that the coin will come up heads, and this prediction will be correct with probability 1/2.
Such a coin toss has one bit of entropy, since there are two possible outcomes that occur with equal probability, and learning the actual outcome contains one bit of information. In contrast, a coin toss using a coin that has two heads and no tails has zero entropy, since the coin will always come up heads and the outcome can be predicted perfectly.
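A short sketch evaluating the entropy formula for the examples discussed above (a fair coin, a two-headed coin and a fair six-sided die):

```c
#include <math.h>
#include <stdio.h>

/* Shannon entropy in bits: H = -sum_i p_i * log2(p_i). */
static double entropy_bits(const double *p, int n)
{
    double h = 0.0;
    for (int i = 0; i < n; i++)
        if (p[i] > 0.0)                 /* 0 * log(0) is taken as 0 */
            h -= p[i] * log2(p[i]);
    return h;
}

int main(void)
{
    double fair_coin[]   = {0.5, 0.5};
    double rigged_coin[] = {1.0, 0.0};
    double fair_die[]    = {1/6.0, 1/6.0, 1/6.0, 1/6.0, 1/6.0, 1/6.0};

    printf("fair coin : %.3f bits\n", entropy_bits(fair_coin, 2));    /* 1.000 */
    printf("two heads : %.3f bits\n", entropy_bits(rigged_coin, 2));  /* 0.000 */
    printf("fair die  : %.3f bits\n", entropy_bits(fair_die, 6));     /* 2.585 */
    return 0;
}
```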
Macro (computer science)
A macro in computer science is a rule or pattern that specifies how a certain input sequence should be mapped to a replacement output sequence according to a defined procedure. The mapping process that instantiates a macro use into a specific sequence is known as macro expansion. A facility for writing macros may be provided as part of a software application or as a part of a programming language. In the former case, macros are used to make tasks using the application less repetitive. In the latter case, they are a tool that allows a programmer to enable code reuse or to design domain-specific languages. Macros are used to make a sequence of computing instructions available to the programmer as a single program statement, making the programming task less tedious and less error-prone. Macros allow positional or keyword parameters that dictate what the conditional assembler program generates and have been used to create entire programs or program suites according to such variables as operating system, platform or other factors.
The term derives from "macro instruction"; such expansions were originally used in generating assembly language code. Keyboard macros and mouse macros allow short sequences of keystrokes and mouse actions to be transformed into other, more time-consuming sequences of keystrokes and mouse actions. In this way, frequently used or repetitive sequences of keystrokes and mouse movements can be automated. Separate programs for creating these macros are called macro recorders. During the 1980s, macro programs – SmartKey, SuperKey, KeyWorks, Prokey – were popular, first as a means to automatically format screenplays, then for a variety of user input tasks. These programs were based on the TSR (terminate and stay resident) mode of operation and applied to all keyboard input, no matter in which context it occurred. They have to some extent fallen into obsolescence following the advent of mouse-driven user interfaces and the availability of keyboard and mouse macros in applications such as word processors and spreadsheets, making it possible to create application-sensitive keyboard macros.
Keyboard macros have in more recent times come to life as a method of exploiting the economy of massively multiplayer online role-playing games (MMORPGs). By tirelessly performing a boring but low-risk action, a player running a macro can earn a large amount of the game's currency or resources; this effect is even larger when a macro-using player operates multiple accounts or operates the accounts for a large amount of time each day. As this money is generated without human intervention, it can upset the economy of the game. For this reason, use of macros is a violation of the TOS or EULA of most MMORPGs, and their administrators fight a continual war to identify and punish macro users. Keyboard and mouse macros that are created using an application's built-in macro features are sometimes called application macros; they are created by letting the application record the actions. An underlying macro programming language, most commonly a scripting language, with direct access to the features of the application may also exist.
The programmers' text editor Emacs (short for "editing macros") follows this idea to a conclusion. In effect, most of the editor is made of macros; Emacs was originally devised as a set of macros in the editing language TECO. Another programmers' text editor, Vim, also has a full implementation of macros: it can record into a register what a person types on the keyboard, and the recording can be replayed or edited, much like VBA macros for Microsoft Office. Vim also has a scripting language called Vimscript for creating macros. Visual Basic for Applications (VBA) is a programming language included in Microsoft Office from Office 97 through Office 2019. However, its function has evolved from, and replaced, the macro languages that were originally included in some of these applications. VBA code can execute automatically when documents are opened; this makes it easy to write computer viruses in VBA, known as macro viruses. In the mid-to-late 1990s, this became one of the most common types of computer virus. However, during the late 1990s and since, Microsoft has been updating its programs, and current anti-virus programs counteract such attacks.
A parameterized macro is a macro that is able to insert given objects into its expansion. This gives the macro some of the power of a function. As a simple example, in the C programming language, this is a typical macro that is not a parameterized macro: #define PI 3.14159. This causes the string "PI" to be replaced with "3.14159" wherever it occurs; it will always be replaced by that same string, and the resulting expansion cannot vary. An example of a parameterized macro, on the other hand, is this: #define pred(x) ((x)-1). What this macro expands to depends on what argument x is passed to it. Here are some possible expansions: pred(y) → ((y)-1); pred(f(z)) → ((f(z))-1); pred(y + 2) → ((y + 2)-1). Parameterized macros are a useful source-level mechanism for performing in-line expansion, but in languages such as C, where they use simple textual substitution, they have a number of severe disadvantages over other mechanisms for performing in-line expansion, such as inline functions. The parameterized macros used in languages such as Lisp, on the other hand, are far more powerful: they operate on the program's code rather than on raw text and can make decisions about what to output based on their arguments.
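The two macro styles above, collected into a small compilable program; the comments show the textual expansions the C preprocessor performs:

```c
#include <stdio.h>

#define PI 3.14159              /* simple (object-like) macro          */
#define pred(x) ((x) - 1)       /* parameterized (function-like) macro */

int main(void)
{
    int y = 5;
    /* The preprocessor rewrites these uses textually before compilation:
       pred(y)     -> ((y) - 1)
       pred(y + 2) -> ((y + 2) - 1)   -- parentheses guard operator precedence */
    printf("PI          = %f\n", PI);
    printf("pred(y)     = %d\n", pred(y));       /* prints 4 */
    printf("pred(y + 2) = %d\n", pred(y + 2));   /* prints 6 */
    return 0;
}
```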
Metadata is "data that provides information about other data". Many distinct types of metadata exist, among these descriptive metadata, structural metadata, administrative metadata, reference metadata and statistical metadata. Descriptive metadata describes a resource for purposes such as identification, it can include elements such as title, abstract and keywords. Structural metadata is metadata about containers of data and indicates how compound objects are put together, for example, how pages are ordered to form chapters, it describes the types, versions and other characteristics of digital materials. Administrative metadata provides information to help manage a resource, such as when and how it was created, file type and other technical information, who can access it. Reference metadata describes the contents and quality of statistical data Statistical metadata may describe processes that collect, process, or produce statistical data. Metadata was traditionally used in the card catalogs of libraries until the 1980s, when libraries converted their catalog data to digital databases.
In the 2000s, as digital formats became the prevalent way of storing data and information, metadata was also used to describe digital data using metadata standards. The first description of "meta data" for computer systems was purportedly made by MIT's Center for International Studies experts David Griffel and Stuart McIntosh in 1967: "In summary we have statements in an object language about subject descriptions of data and token codes for the data. We have statements in a meta language describing the data relationships and transformations, ought/is relations between norm and data." There are different metadata standards for each different discipline. Describing the contents and context of data or data files increases their usefulness. For example, a web page may include metadata specifying what software language the page is written in, what tools were used to create it, what subjects the page is about, and where to find more information about the subject; this metadata can automatically improve the reader's experience and make it easier for users to find the web page online.
A CD may include metadata providing information about the musicians and songwriters whose work appears on the disc. A principal purpose of metadata is to help users discover resources. Metadata also helps to organize electronic resources, provide digital identification, and support the archiving and preservation of resources. Metadata assists users in resource discovery by "allowing resources to be found by relevant criteria, identifying resources, bringing similar resources together, distinguishing dissimilar resources, giving location information." Metadata of telecommunication activities, including Internet traffic, is widely collected by various national governmental organizations. This data can be used for mass surveillance. In many countries, metadata relating to emails, telephone calls, web pages, video traffic, IP connections and cell phone locations is stored by government organizations. Metadata means "data about data". Although the "meta" prefix means "after" or "beyond", it is used to mean "about" in epistemology.
Metadata is defined as the data providing information about one or more aspects of the data. Some examples include: the means of creation of the data, the purpose of the data, the time and date of creation, the creator or author of the data, the location on a computer network where the data was created, the standards used, the file size, the data quality, the source of the data, and the process used to create the data. For example, a digital image may include metadata that describes how large the picture is, the color depth, the image resolution, when the image was created, the shutter speed and other data. A text document's metadata may contain information about how long the document is, who the author is, when the document was written, and a short summary of the document. Metadata within web pages can also contain descriptions of page content, as well as key words linked to the content; these are called "metatags", and they were used as a primary factor in determining the order of results for a web search until the late 1990s. The reliance on metatags in web searches was decreased in the late 1990s because of "keyword stuffing".
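Purely as an illustration of the kinds of fields listed above, a hypothetical record type for image metadata might look like the following; real metadata formats such as EXIF define their own schemas, so the field names here are invented:

```c
#include <stdio.h>

/* Hypothetical container for the image metadata the text describes
   (dimensions, colour depth, capture time, shutter speed). */
struct ImageMetadata {
    int         width_px;
    int         height_px;
    int         bits_per_pixel;
    const char *created;        /* e.g. an ISO 8601 timestamp  */
    double      shutter_s;      /* exposure time in seconds    */
};

int main(void)
{
    struct ImageMetadata m = {4000, 3000, 24, "2019-05-01T12:00:00Z", 1.0 / 250.0};
    printf("%dx%d, %d bpp, taken %s, shutter %.4f s\n",
           m.width_px, m.height_px, m.bits_per_pixel, m.created, m.shutter_s);
    return 0;
}
```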
Metatags were being misused to trick search engines into thinking some websites had more relevance in the search than they really did. Metadata can be stored and managed in a database, often called a metadata registry or metadata repository. However, without context and a point of reference, it might be impossible to identify metadata just by looking at it. For example: by itself, a database containing several numbers, all 13 digits long, could be the results of calculations or a list of numbers to plug into an equation; without any other context, the numbers themselves can be perceived as the data. But if given the context that this database is a log of a book collection, those 13-digit numbers may now be identified as ISBNs: information that refers to the book but is not itself the information within the book. The term "metadata" was coined in 1968 by Philip Bagley, in his book "Extension of Programming Language Concepts", where it is clear that he uses the term in the ISO 11179 "traditional" sense, "structural metadata", i.e. "data about the containers of data".
Word processor
A word processor is a computer program or device that provides for input, editing and output of text, along with other features. Early word processors were stand-alone devices dedicated to the function, but current word processors are word processor programs running on general-purpose computers. The functions of a word processor program fall somewhere between those of a simple text editor and a fully functioned desktop publishing program. However, the distinctions between these three have changed over time and are now somewhat unclear. Word processors did not develop out of computer technology; rather, they evolved from the needs of writers. The history of word processing is the story of the gradual automation of the physical aspects of writing and editing, and then of the refinement of the technology to make it available to corporations and individuals. The term word processing appeared in American offices in the early 1970s, centered on the idea of reorganizing typists' work, but the meaning soon shifted toward automated text editing.
At first, the designers of word processing systems combined existing technologies with emerging ones to develop stand-alone equipment, creating a new business distinct from the emerging world of the personal computer. The concept of word processing arose from the more general data processing, which since the 1950s had been the application of computers to business administration. Through history, there have been three types of word processors: mechanical, electronic and software. The first word processing device was patented by Henry Mill, for a machine said to be capable of writing "so clearly and accurately you could not distinguish it from a printing press". More than a century later, another patent appeared in the name of William Austin Burt for the typographer. In the late 19th century, Christopher Latham Sholes created the first recognizable typewriter, although it was a large device, described as a "literary piano". These mechanical systems could not "process text" beyond changing the position of type, filling empty spaces or jumping to new lines.
It was not until decades later that the introduction of electricity and electronics into typewriters began to help the writer with the mechanical part. The term "word processing" itself was created in the 1950s by Ulrich Steinhilper, a German IBM typewriter sales executive. However, it did not make its appearance in 1960s office management or computing literature, though many of the ideas and technologies to which it would later be applied were already well known. By 1971 the term was recognized by the New York Times as a business "buzz word". Word processing paralleled the more general "data processing", or the application of computers to business administration. Thus by 1972 discussion of word processing was common in publications devoted to business office management and technology, and by the mid-1970s the term would have been familiar to any office manager who consulted business periodicals. By the late 1960s, IBM had developed the IBM MT/ST (Magnetic Tape/Selectric Typewriter). This was a model of the IBM Selectric typewriter from the earlier part of the decade, but built into its own desk and integrated with magnetic tape recording and playback facilities, with controls and a bank of electrical relays.
The MT/ST automated word wrap. The device allowed text to be rewritten after it had been recorded on tape, and one could collaborate by passing a tape to another person to edit or copy; it was a revolution for the word processing industry. In 1969 the tapes were replaced by magnetic cards; these memory cards were inserted into the side of an extra device that accompanied the MT/ST and could read and record the work. In the early 1970s, word processing became computer-based with the development of several innovations. Just before the arrival of the personal computer, IBM developed the floppy disk. In the early 1970s, word-processing systems with CRT-screen display editing were designed. At this time these stand-alone word processing systems were designed and marketed by several pioneering companies. Linolex Systems was founded in 1970 by Robert Oleksiak. Linolex based its technology on floppy drives and software; it was a computer-based system for application in the word processing business, and it sold systems through its own sales force. With a base of installed systems at more than 500 customer sites, Linolex Systems sold 3 million units in 1975, a year before Apple Computer was incorporated in 1976.
At this time, Lexitron Corporation also produced a series of dedicated word processing microcomputers. Lexitron was the first to use a full-size video display screen in its models, by 1978. Lexitron used 5-1/4 inch floppy diskettes, which became the standard in the personal computer field. The program disk was inserted in one drive and the system booted up; the data diskette was then put in the second drive. The operating system and the word processing program were combined in one program. Another of the early word processing adopters was Vydec, which in 1973 created the first modern text processor, the "Vydec Word Processing System"; it had multiple built-in functions, including the ability to share content by diskette and to print it. The Vydec Word Processing System sold for $12,000 at the time. The Redactron Corporation designed and manufactured editing systems, including correcting/editing typewriters and card units, and a word processor called the Data Secretary. The Burroughs Corporation later acquired Redactron.