Media player (software)
A media player is a computer program for playing multimedia files such as audio and video. Media players display standard media-control icons familiar from physical devices such as tape recorders and CD players, including play, fast-forward and stop buttons. In addition, they have progress bars to locate the current position in the duration of the media file. Mainstream operating systems have at least one built-in media player. For example, Windows comes with Windows Media Player, while macOS comes with QuickTime Player and iTunes. Linux distributions may come with a media player such as SMPlayer, Audacious, Banshee, MPlayer, Rhythmbox, Totem, VLC Media Player or xine. Android comes with Google Play Music as its default media player, along with many apps such as Poweramp, Beautiful Music Player and VLC Media Player. Different media players may have different goals and feature sets. Video players are a group of media players whose features are geared more toward playing digital video. For example, Windows DVD Player plays DVD-Video discs and nothing else.
Media Player Classic can play individual audio and video files, but many of its features, such as color correction, picture sharpening, its set of hotkeys, DVB support and subtitle support, are only useful for video material such as films and cartoons. Audio players, on the other hand, specialize in digital audio. For example, AIMP plays only audio formats. MediaMonkey can play both audio and video formats, but many of its features, including its media library, lyric discovery, music visualization, online radio, audiobook indexing and tag editing, are geared toward the consumption of audio material, and watching video files with it can be a trying experience. General-purpose media players also exist. For example, Windows Media Player has dedicated features for both audio and video material, although it cannot match the combined feature set of Media Player Classic and MediaMonkey. 3D video players are used to play 2D video in 3D format. A high-quality three-dimensional video presentation requires that each frame of a motion picture be embedded with information on the depth of objects present in the scene.
This process involves shooting the video with special equipment from two distinct perspectives, or modelling and rendering each frame as a collection of objects composed of 3D vertices and textures, much as in a modern video game, to achieve special effects. Tedious and costly, this method is used in only a small fraction of the movies produced worldwide, and most movies remain in the form of traditional 2D images. It is, however, possible to give an otherwise two-dimensional picture the appearance of depth. Using a technique known as anaglyph processing, a "flat" picture can be transformed so as to give an illusion of depth when viewed through anaglyph glasses. An image viewed through anaglyph glasses appears to have both protruding and embedded objects in it, at the expense of somewhat distorted colours. The method itself is old, dating back to the mid-19th century, but it is only with recent advances in computer technology that it has become possible to apply this kind of transformation to a series of frames in a motion picture reasonably fast or in real time, i.e. as the video is being played back.
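As a concrete illustration, a simple red-cyan anaglyph can be composited by taking the red channel from the left-eye frame and the green and blue channels from the right-eye frame. The sketch below uses made-up pixel tuples rather than any real image format:

```python
# Minimal red-cyan anaglyph compositing sketch (hypothetical pixel data).
# Each pixel is an (R, G, B) tuple; the left-eye image supplies the red
# channel, the right-eye image supplies green and blue.
def anaglyph_pixel(left, right):
    return (left[0], right[1], right[2])

left_frame = [(200, 10, 10), (120, 120, 120)]
right_frame = [(10, 10, 200), (130, 130, 130)]

# Combine the two views pixel by pixel into one anaglyph frame.
frame_3d = [anaglyph_pixel(l, r) for l, r in zip(left_frame, right_frame)]
print(frame_3d)
```

Real-time 3D players apply the same per-pixel combination to every frame of the video as it is decoded.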
Several implementations exist in the form of 3D video players that render conventional 2D video in anaglyph 3D, as well as 3D video converters that transform video into stereoscopic anaglyph and transcode it for playback with regular software or hardware video players. A home theater PC or media center computer is a convergence device that combines some or all of the capabilities of a personal computer with a software application supporting video and audio playback, and sometimes video recording. Although computers with some of these capabilities were available from the late 1980s, the term "Home Theater PC" first appeared in the mainstream press in 1996. Since 2007, other types of consumer electronics, including gaming systems and dedicated media devices, have crossed over to manage video and music content. The term "media center" refers to specialized computer programs designed to run on standard personal computers.

See also: Comparison of video player software; Comparison of audio player software
Newline is a control character or sequence of control characters in a character encoding specification used to signify the end of a line of text and the start of a new one. Text editors insert this special character when the user presses the Enter key; when displaying a text file, this control character causes the text editor to show the following characters on a new line. In the mid-1800s, long before the advent of teleprinters and teletype machines, Morse code operators, or telegraphists, invented and used Morse code prosigns to encode white-space text formatting in formal written text messages. In particular, the Morse prosign represented by the concatenation of two literal textual Morse code "A" characters sent without the normal inter-character spacing is used in Morse code to encode and indicate a new line in a formal text message. In the age of modern teleprinters, standardized character-set control codes were developed to aid in white-space text formatting. ASCII was developed simultaneously by the International Organization for Standardization and the American Standards Association, the latter being the predecessor organization to the American National Standards Institute.
During the period of 1963 to 1968, the ISO draft standards supported the use of either CR+LF or LF alone as a newline, while the ASA drafts supported only CR+LF. The sequence CR+LF was used on many early computer systems that had adopted Teletype machines—typically a Teletype Model 33 ASR—as a console device, because this sequence was required to position those printers at the start of a new line; the separation of newline into two functions concealed the fact that the print head could not return from the far right to the beginning of the next line in time to print the next character. Any character printed after a CR would print as a smudge in the middle of the page while the print head was still moving the carriage back to the first position. "The solution was to make the newline two characters: CR to move the carriage to column one, LF to move the paper up." In fact, it was necessary to send extra characters—extraneous CRs or NULs—which are ignored but give the print head time to move to the left margin.
Many early video displays also required multiple character times to scroll the display. On such systems, applications had to talk directly to the Teletype machine and follow its conventions, since the concept of device drivers hiding such hardware details from the application was not yet well developed. Therefore, text was composed to satisfy the needs of Teletype machines. Most minicomputer systems from DEC used this convention. CP/M also used it, in order to print on the same terminals that minicomputers used. From there MS-DOS adopted CP/M's CR+LF in order to be compatible, and this convention was inherited by Microsoft's Windows operating system. The Multics operating system used LF alone as its newline. Multics used a device driver to translate this character to whatever sequence a printer needed, and the single byte was more convenient for programming. What seems like a more obvious choice, CR, was not used, because CR provided the useful function of overprinting one line with another to create boldface and strikethrough effects.
Moreover, the use of LF alone as a line terminator had already been incorporated into drafts of the eventual ISO/IEC 646 standard. Unix followed the Multics practice, and Unix-like systems followed Unix. The concepts of line feed and carriage return are associated, and can be considered either separately or together. In the physical media of typewriters and printers, two axes of motion, "down" and "across", are needed to create a new line on the page. Although the design of a machine must consider them separately, the abstract logic of software can combine them together as one event; this is why a newline in character encoding can be defined as CR and LF combined into one. Some character sets provide a separate newline character code. EBCDIC, for example, provides an NL character code in addition to the CR and LF codes. Unicode, in addition to providing the ASCII CR and LF control codes, provides a "next line" control code, as well as control codes for "line separator" and "paragraph separator" markers. Software applications and operating systems represent a newline with one or two control characters: EBCDIC systems—mainly IBM mainframe systems, including z/OS and i5/OS—use NL as the character combining the functions of line feed and carriage return.
The equivalent Unicode character is called NEL. EBCDIC also has control characters called CR and LF, but the numerical value of its LF differs from the one used by ASCII. Additionally, some EBCDIC variants use NL but assign a different numeric code to the character. Operating systems for the CDC 6000 series defined a newline as two or more zero-valued six-bit characters at the end of a 60-bit word; some configurations also defined a zero-valued character as a colon character, with the result that multiple colons could be interpreted as a newline depending on position. RSX-11 and OpenVMS use a record-based file system, which stores text files as one record per line. In most file formats, no line terminators are stored, but the Record Management Services facility can transparently add a terminator to each line when it is retrieved by an application.
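The three historical conventions can be observed directly in code. This Python sketch (file names are arbitrary) writes the same two lines with each convention, then reads them back with Python's universal-newline handling, which maps all three to "\n":

```python
# Write the same text with the Unix (LF), DOS/Windows (CR+LF), and
# classic Mac (CR) newline conventions, then read each file back.
import os
import tempfile

conventions = {"unix": "\n", "dos": "\r\n", "old_mac": "\r"}

for name, nl in conventions.items():
    path = os.path.join(tempfile.gettempdir(), f"nl_demo_{name}.txt")
    # newline="" disables output translation, so nl is written literally.
    with open(path, "w", newline="") as f:
        f.write(f"line one{nl}line two{nl}")
    # Default reading mode uses universal newlines: all three
    # conventions are translated to "\n".
    with open(path, "r") as f:
        assert f.read() == "line one\nline two\n"
    os.remove(path)

print("all three conventions read back identically")
```

The same translation is visible in `str.splitlines`, which recognizes LF, CR+LF, and lone CR as line boundaries.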
Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech computer or speech synthesizer, and can be implemented in software or hardware products. A text-to-speech system converts normal language text into speech. Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database. Systems differ in the size of the stored speech units. For specific usage domains, the storage of entire words or sentences allows for high-quality output. Alternatively, a synthesizer can incorporate a model of the vocal tract and other human voice characteristics to create a completely "synthetic" voice output. The quality of a speech synthesizer is judged by its similarity to the human voice and by its ability to be understood clearly. An intelligible text-to-speech program allows people with visual impairments or reading disabilities to listen to written words on a home computer. Many computer operating systems have included speech synthesizers since the early 1990s.
A text-to-speech system is composed of two parts: a front-end and a back-end. The front-end has two major tasks. First, it converts raw text containing symbols like numbers and abbreviations into the equivalent of written-out words; this process is called text normalization, pre-processing, or tokenization. Second, the front-end assigns phonetic transcriptions to each word, and divides and marks the text into prosodic units, like phrases and sentences. The process of assigning phonetic transcriptions to words is called text-to-phoneme or grapheme-to-phoneme conversion. Phonetic transcriptions and prosody information together make up the symbolic linguistic representation that is output by the front-end. The back-end—often referred to as the synthesizer—then converts the symbolic linguistic representation into sound. In certain systems, this part includes the computation of the target prosody, which is then imposed on the output speech. Long before the invention of electronic signal processing, some people tried to build machines to emulate human speech.
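The text-normalization step of the front-end can be sketched as a toy lookup. The abbreviation and number tables below are invented for illustration; real systems use far richer, context-sensitive rules:

```python
# Toy text normalization: expand abbreviations and digits into
# written-out words, as a TTS front-end would before phonetic
# transcription. The tables here are illustrative, not a real ruleset.
ABBREVIATIONS = {"Dr.": "doctor", "St.": "street"}
NUMBERS = {"2": "two", "3": "three"}

def normalize(text):
    words = []
    for token in text.split():
        token = ABBREVIATIONS.get(token, token)  # "Dr." -> "doctor"
        token = NUMBERS.get(token, token)        # "3" -> "three"
        words.append(token)
    return " ".join(words)

print(normalize("Dr. Smith lives at 3 Elm St."))
```

Note that even this tiny example hides ambiguity a real system must resolve: "St." can mean "street" or "saint", and "Dr." can mean "doctor" or "drive", depending on context.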
Some early legends of the existence of "Brazen Heads" involved Pope Silvester II, Albertus Magnus, Roger Bacon. In 1779 the German-Danish scientist Christian Gottlieb Kratzenstein won the first prize in a competition announced by the Russian Imperial Academy of Sciences and Arts for models he built of the human vocal tract that could produce the five long vowel sounds. There followed the bellows-operated "acoustic-mechanical speech machine" of Wolfgang von Kempelen of Pressburg, described in a 1791 paper; this machine added models of the tongue and lips, enabling it to produce consonants as well as vowels. In 1837, Charles Wheatstone produced a "speaking machine" based on von Kempelen's design, in 1846, Joseph Faber exhibited the "Euphonia". In 1923 Paget resurrected Wheatstone's design. In the 1930s Bell Labs developed the vocoder, which automatically analyzed speech into its fundamental tones and resonances. From his work on the vocoder, Homer Dudley developed a keyboard-operated voice-synthesizer called The Voder, which he exhibited at the 1939 New York World's Fair.
Dr. Franklin S. Cooper and his colleagues at Haskins Laboratories built the Pattern Playback in the late 1940s and completed it in 1950. There were several different versions of this hardware device; the machine converted pictures of the acoustic patterns of speech, in the form of a spectrogram, back into sound. Using this device, Alvin Liberman and colleagues discovered acoustic cues for the perception of phonetic segments. Released in 1975, MUSA was one of the first speech synthesis systems; it consisted of stand-alone computer hardware and specialized software that enabled it to read Italian. A second version, released in 1978, was able to sing Italian in an "a cappella" style. Dominant systems in the 1980s and 1990s were the DECtalk system, based on the work of Dennis Klatt at MIT, and the Bell Labs system. Early electronic speech synthesizers sounded robotic and were barely intelligible; the quality of synthesized speech has steadily improved, but as of 2016 output from contemporary speech synthesis systems remained distinguishable from actual human speech.
Kurzweil predicted in 2005 that as the cost-performance ratio caused speech synthesizers to become cheaper and more accessible, more people would benefit from the use of text-to-speech programs. The first computer-based speech-synthesis systems originated in the late 1950s. Noriko Umeda et al. developed the first general English text-to-speech system in 1968, at the Electrotechnical Laboratory in Japan. In 1961, physicist John Larry Kelly, Jr and his colleague Louis Gerstman used an IBM 704 computer to synthesize speech, an event among the most prominent in the history of Bell Labs. Kelly's voice recorder synthesizer recreated the song "Daisy Bell", with musical accompaniment from Max Mathews. Coincidentally, Arthur C. Clarke was visiting his friend and colleague John Pierce at the Bell Labs Murray Hill facility. Clarke was so impressed by the demonstration that he used it in the climactic scene of his screenplay for his novel 2001: A Space Odyssey, where the HAL 9000 computer sings the same song as astronaut Dave Bowman puts it to sleep.
A text editor is a type of computer program that edits plain text. Such programs are sometimes known as "notepad" software, following the naming of Microsoft Notepad. Text editors are provided with operating systems and software development packages, and can be used to change files such as configuration files, documentation files and programming language source code. There are important differences between plain text and rich text. Plain text consists exclusively of character representation; each character is represented by a fixed-length sequence of one, two, or four bytes, or as a variable-length sequence of one to four bytes, in accordance with specific character encoding conventions, such as ASCII, ISO/IEC 2022, UTF-8, or Unicode. These conventions define many printable characters, as well as non-printing characters that control the flow of the text, such as space, line break, and page break. Plain text contains no other information about the text itself, not even the character encoding convention employed. Plain text is stored in text files, although text files do not exclusively store plain text.
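The "one, two, or four bytes" point can be checked directly in Python; the sample string and the particular encodings below are arbitrary choices:

```python
# Byte lengths of the same five-character string under different
# encodings. ASCII cannot represent the accented character at all;
# UTF-8 is variable-length; UTF-16/UTF-32 include a byte-order mark.
text = "Héllo"

for enc in ("ascii", "utf-8", "utf-16", "utf-32"):
    try:
        data = text.encode(enc)
        print(f"{enc}: {len(data)} bytes")
    except UnicodeEncodeError:
        print(f"{enc}: cannot represent all characters")
```

Here UTF-8 needs six bytes for five characters because "é" occupies two bytes, illustrating why byte count and character count are not the same thing in plain text.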
In the early days of computers, plain text was displayed using a monospace font, such that horizontal alignment and columnar formatting were sometimes done using whitespace characters. For compatibility reasons, this tradition has not changed. Rich text, on the other hand, may contain metadata, character formatting data, paragraph formatting data, and page specification data. Rich text can therefore be quite complex, and can be saved in a binary format, in text files adhering to a markup language, or in a hybrid of both. Text editors are intended to open and save text files containing either plain text or anything that can be interpreted as plain text, including the markup for rich text or the markup for something else. Before text editors existed, computer text was punched into cards with keypunch machines. Physical boxes of these thin cardboard cards were then inserted into a card reader. Magnetic tape and disk "card-image" files created from such card decks had no line-separation characters at all, and assumed fixed-length 80-character records.
An alternative to cards was punched paper tape, which could be created by some teleprinters. The first text editors were "line editors" oriented to teleprinter- or typewriter-style terminals without displays. Commands effected edits to a file at an imaginary insertion point called the "cursor". Edits were verified by typing a command to print a small section of the file, and periodically by printing the entire file. In some line editors, the cursor could be moved by commands that specified the line number in the file, text strings for which to search, or regular expressions. Line editors were major improvements over keypunching; some line editors could even be used by keypunch. Some common line editors supported a "verify" mode in which change commands displayed the altered lines. When computer terminals with video screens became available, screen-based text editors became common. One of the earliest full-screen editors was O26, written for the operator console of the CDC 6000 series computers in 1967. Another early full-screen editor was vi.
Written in the 1970s, it is still a standard editor on Unix and Linux operating systems. Also written in the 1970s was the UCSD Pascal Screen Oriented Editor, optimized both for indented source code and for general text. Emacs, one of the first free and open-source software projects, is another early full-screen or real-time editor, and one that was ported to many systems. A full-screen editor's ease of use and speed motivated many early purchases of video terminals. The core data structure in a text editor is the one that manages the string or list of records representing the current state of the file being edited. While the former could be stored in a single long consecutive array of characters, the desire for text editors that could more quickly insert text, delete text, and undo/redo previous edits led to the development of more complicated sequence data structures. A typical text editor uses a gap buffer, a linked list of lines, a piece table, or a rope as its sequence data structure. Some text editors are small and simple, while others offer broad and complex functions.
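The first of those sequence structures, the gap buffer, can be sketched in a few lines. This is an illustrative toy, not an editor-grade implementation: the "gap" sits at the cursor, so inserting and deleting near the cursor touch only the ends of two arrays instead of shifting the whole text.

```python
# Minimal gap-buffer sketch. Characters left of the cursor live in
# `before`; characters right of the cursor live in `after`, stored
# reversed so the character adjacent to the cursor is at the end.
class GapBuffer:
    def __init__(self, text=""):
        self.before = list(text)  # chars left of cursor
        self.after = []           # chars right of cursor (reversed)

    def insert(self, ch):
        self.before.append(ch)    # O(1) amortized at the cursor

    def delete(self):             # backspace: drop char left of cursor
        if self.before:
            self.before.pop()

    def move_left(self):
        if self.before:
            self.after.append(self.before.pop())

    def move_right(self):
        if self.after:
            self.before.append(self.after.pop())

    def text(self):
        return "".join(self.before) + "".join(reversed(self.after))

buf = GapBuffer("helo")
buf.move_left()    # cursor now sits before the final "o"
buf.insert("l")    # fix the typo at the cursor
print(buf.text())  # hello
```

Piece tables and ropes solve the same problem with better worst-case behavior for very large files, at the cost of more complex bookkeeping.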
For example, many Unix-like operating systems include the pico editor, and many include the vi and Emacs editors. Microsoft Windows systems come with the simple Notepad, though many people—especially programmers—prefer other editors with more features. Under the Apple Macintosh's classic Mac OS there was the native SimpleText, replaced in Mac OS X by TextEdit, which combines the features of a text editor with those typical of a word processor, such as rulers and multiple font selection. These features are not enabled by default, but must be switched on by user command or through the program automatically determining the file type. Most word processors can read and write files in plain text format, allowing them to open files saved from text editors. Saving these files from a word processor, however, requires ensuring the file is written in plain text format, and that any text encoding or BOM settings won't obscure the file for its intended use.
Thumbnails are reduced-size versions of pictures or videos, used to help in recognizing and organizing them, serving the same role for images as a normal text index does for words. In the age of digital images, visual search engines and image-organizing programs use thumbnails, as do most modern operating systems and desktop environments, such as Microsoft Windows, macOS, KDE and GNOME. On web pages, they avoid the need to download larger files unnecessarily. Thumbnails are ideally implemented on web pages as separate, smaller copies of the original image, in part because one purpose of a thumbnail image on a web page is to reduce bandwidth and download time. Some web designers produce thumbnails with HTML or client-side scripting that makes the user's browser shrink the picture, rather than use a smaller copy of the image. This results in no saved bandwidth, and the visual quality of browser resizing is less than ideal. Displaying a significant part of the picture instead of the full frame can allow the use of a smaller thumbnail while maintaining recognizability.
For example, when thumbnailing a full-body portrait of a person, it may be better to show the face reduced than an indistinct figure. However, this may mislead the viewer about what the image contains, so it is more suited to artistic presentations than to searching or catalogue browsing. In 2002, the court in the US case Kelly v. Arriba Soft Corporation ruled that it was fair use for Internet search engines to use thumbnail images to help web users find what they seek. The word "thumbnail" is a reference to the human thumbnail and alludes to the small size of the image or picture, comparable to the size of the human thumbnail. While the earliest use of the word in this sense dates back to the 17th century, the American Heritage Dictionary of Idioms is reported to have documented that the expression first appeared in the mid-19th century to refer to 'a drawing the size of the thumbnail'. The word was later used figuratively, in both noun and adjective form, to refer to anything small or concise, such as a biographical essay.
The use of the word "thumbnail" in the specific context of computer images, as 'a small graphical representation, as of a larger graphic, a page layout, etc.', appears to date from the 1980s. The Denver Public Library Digitization and Cataloguing Program produces thumbnails that are 160 pixels in the long dimension. The California Digital Library Guidelines for Digital Images recommend 150-200 pixels for each dimension. Picture Australia requires thumbnails to be 150 pixels in the long dimension, and the International Dunhuang Project Standards for Digitization and Image Management specifies a height of 96 pixels at 72 ppi. DeviantArt automatically produces thumbnails. Flickr automatically produces thumbnails that are a maximum of 240 pixels in the long dimension, or smaller 75×75-pixel versions, and applies an unsharp mask to them. Picasa automatically produces thumbnails that are a maximum of 144 pixels in the long dimension, or 160×160 pixels for album thumbnails. The term vignette is sometimes used to describe an image that is smaller than the original and larger than a thumbnail, but no more than 250 pixels in the long dimension.
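Most of the pixel budgets quoted above constrain the long dimension, which amounts to a simple fit-in-a-box computation that preserves aspect ratio. A sketch (the 240-pixel box merely echoes the Flickr figure quoted above):

```python
# Compute thumbnail dimensions that cap the long dimension while
# preserving the image's aspect ratio. Images already within the
# budget are left alone (thumbnails should never be upscaled).
def thumbnail_size(width, height, max_long_side):
    scale = max_long_side / max(width, height)
    if scale >= 1:
        return width, height
    return round(width * scale), round(height * scale)

print(thumbnail_size(1920, 1080, 240))  # a 16:9 frame fit to 240px
```

A 1920×1080 frame fit to a 240-pixel long dimension comes out at 240×135, keeping the 16:9 shape intact.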
Art directors, storyboard artists and graphic designers, as well as other kinds of visual artists, use the term "thumbnail sketch" to describe a small drawing on paper used to explore multiple ideas quickly. Thumbnail sketches may include as much detail as any other small sketch. A "comprehensive" thumbnail sketch of a printed project, more or less to final size, is referred to as a "comp", and can be detailed with production information.

See also: Image organizer; Contact print, a film cognate of the thumbnail; Thumbshot
A hex editor is a type of computer program that allows for manipulation of the fundamental binary data that constitutes a computer file. The name 'hex' comes from 'hexadecimal', a standard numerical format for representing binary data. A typical computer file occupies multiple areas on the platter of a disk drive, whose contents are combined to form the file. Hex editors that are designed to parse and edit sector data from the physical segments of floppy or hard disks are sometimes called sector editors or disk editors. With a hex editor, a user can see or edit the raw and exact contents of a file, as opposed to the interpretation of the same content that other, higher-level application software may associate with the file format. For example, this could be raw image data, in contrast to the way image-editing software would interpret and show the same file. Hex editors may be used to correct data corrupted by system or application program problems where it may not be worthwhile to write a special program to make the corrections.
They are also useful for bypassing application edit checks, and have been used to "patch" executable programs to change or add a few instructions as an alternative to recompilation. Program fixes for IBM mainframe systems are sometimes distributed as patches rather than as a complete copy of the affected program. In most hex editor applications, the data of the computer file is represented as hexadecimal values grouped in 4 groups of 4 bytes, followed by one group of 16 printable ASCII characters corresponding to those 16 bytes. Non-printable ASCII characters and characters that would take more than one character space are represented by a dot in the ASCII field. Since the invention of computers and their different uses, a variety of file formats has been created. For some, it was convenient to be able to access the data as a series of raw digits. A program called SUPERZAP was available for IBM OS/360 systems; it could edit raw disk records and understood the format of executable files.
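The layout just described (hexadecimal values in 4 groups of 4 bytes, followed by a 16-character ASCII column) can be reproduced with a short sketch:

```python
# Minimal hex-dump sketch: 16 bytes per line, grouped as 4 groups of
# 4 bytes, with an offset column and a printable-ASCII column where
# non-printable bytes appear as dots.
def hex_dump(data):
    lines = []
    for offset in range(0, len(data), 16):
        chunk = data[offset:offset + 16]
        groups = [chunk[i:i + 4].hex() for i in range(0, len(chunk), 4)]
        ascii_col = "".join(
            chr(b) if 32 <= b < 127 else "." for b in chunk
        )
        lines.append(f"{offset:08x}  {' '.join(groups):<35}  {ascii_col}")
    return "\n".join(lines)

print(hex_dump(b"Hello, hex editors!\x00\x01"))
```

The first line shows the first 16 bytes; the two trailing control bytes on the second line appear as dots in the ASCII column, exactly as a hex editor would render them.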
Pairs of hexadecimal digits are the current standard, because the vast majority of machines and file formats in use today handle data in units or groups of 8-bit bytes. Hexadecimal and octal are common because these digits allow one to see which bits in a byte are set. Today, decimal representation is becoming a popular second option due to the more familiar number base and additional helper tools, such as template systems and data inspectors, that reduce the benefits of the hexadecimal numerical format. Some hex editors offer a template system that can present the sequence of bytes of a binary file in a structured way, covering part or all of the desired file format. The GUI for a template is typically a separate tool window next to the main hex editor; some cheat-engine systems consist only of such a template GUI. A template is represented as a list of labeled text boxes, such that individual values of a file can be edited in the appropriate format. Without template support, it is necessary to find the right offset in a file where the value to be changed is stored.
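What a template automates can also be done by hand with an offset and a type. The sketch below reads a big-endian 32-bit value from a PNG-style header; the single "template field" here is hypothetical and chosen only for illustration:

```python
# Reading one typed value at a known offset, the kind of chore a
# hex-editor template automates. The data is the start of a PNG file:
# an 8-byte signature followed by a big-endian u32 chunk length.
import struct

data = bytes.fromhex("89504e470d0a1a0a0000000d49484452")

# "Template": one field, a big-endian unsigned 32-bit int at offset 8.
(length,) = struct.unpack_from(">I", data, 8)
print(length)  # 13
```

The `>I` format string handles the byte-order conversion that raw hex editing would otherwise require the user to do mentally.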
Raw hex editing may require conversion between hexadecimal and decimal, catering for byte order, and other data-type conversion peculiarities. Templates can be stored as files and thereby exchanged between users; some are shared publicly on the manufacturer's website. Most, if not all, hex editors define their own template file format. Advanced hex editors have scripting systems that let the user create macro-like functionality as a sequence of user-interface commands for automating common tasks; this can be used to provide scripts that automatically patch files, or to write more complex and intelligent templates. Scripting languages vary, ranging from product-specific languages resembling MS-DOS batch files to systems that support fully fledged scripting languages such as Lua or Python. A few select editors have a plugin system that allows the GUI to be extended and new functionality to be added by loading dynamic link libraries written in a C-compatible language.

See also: Comparison of hex editors; Disk editor; Hex dump; Hexadecimal; Octal
A computer file is a computer resource for recording data discretely in a computer storage device. Just as words can be written to paper, so can information be written to a computer file. Files can be transferred through the Internet. There are different types of computer files, designed for different purposes. A file may be designed to store a picture, a written message, a video, a computer program, or a wide variety of other kinds of data; some types of files can store several types of information at once. By using computer programs, a person can open, change and close a computer file. Computer files may be reopened and copied an arbitrary number of times. Files are organised in a file system, which keeps track of where the files are located on disk and enables user access. The word "file" derives from the Latin filum. "File" was used in the context of computer storage as early as January 1940. In Punched Card Methods in Scientific Computation, W. J. Eckert stated, "The first extensive use of the early Hollerith Tabulator in astronomy was made by Comrie.
He used it for building a table from successive differences, and for adding large numbers of harmonic terms". "Tables of functions are constructed from their differences with great efficiency, either as printed tables or as a file of punched cards." In February 1950, in a Radio Corporation of America advertisement in Popular Science magazine describing a new "memory" vacuum tube it had developed, RCA stated: "the results of countless computations can be kept 'on file' and taken out again. Such a 'file' now exists in a 'memory' tube developed at RCA Laboratories. Electronically it retains figures fed into calculating machines, holds them in storage while it memorizes new ones - speeds intelligent solutions through mazes of mathematics." In 1952, "file" denoted information stored on punched cards. In early use, the underlying hardware, rather than the contents stored on it, was denominated a "file". For example, the IBM 350 disk drives were denominated "disk files". The introduction, circa 1961, by the Burroughs MCP and the MIT Compatible Time-Sharing System of the concept of a "file system" that managed several virtual "files" on one storage device is the origin of the contemporary denotation of the word.
Although the contemporary "register file" demonstrates the early concept of files, its use has greatly decreased. On most modern operating systems, files are organized into one-dimensional arrays of bytes. The format of a file is defined by its content, since a file is solely a container for data, although on some platforms the format is indicated by its filename extension, which specifies the rules for how the bytes must be organized and interpreted meaningfully. For example, the bytes of a plain text file are associated with either ASCII or UTF-8 characters, while the bytes of image and audio files are interpreted otherwise. Most file types also allocate a few bytes for metadata, which allows a file to carry some basic information about itself. Some file systems can store arbitrary file-specific data outside of the file format, but linked to the file, for example extended attributes or forks. On other file systems this can be done via software-specific databases. All those methods, however, are more susceptible to loss of metadata than are container and archive file formats.
At any instant in time, a file has a size, normally expressed as a number of bytes, that indicates how much storage is associated with the file. In most modern operating systems the size can be any non-negative whole number of bytes up to a system limit. Many older operating systems kept track only of the number of blocks or tracks occupied by a file on a physical storage device; in such systems, software employed other methods to track the exact byte count. The general definition of a file does not require that its size have any real meaning, unless the data within the file happens to correspond to data within a pool of persistent storage. A special case is a zero-byte file. For example, the file to which the link /bin/ls points in a typical Unix-like system has a defined size that seldom changes. Compare this with /dev/null, which is also a file, but whose size may be obscure. Information in a computer file can consist of smaller packets of information that are individually different but share some common traits. For example, a payroll file might contain information concerning all the employees in a company and their payroll details.
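Reading a file's byte count, including the zero-byte special case, can be demonstrated with the standard library (the temporary file and its contents are arbitrary):

```python
# Inspect a file's size in bytes via os.stat, then truncate it to
# show that a zero-byte file is still a valid file.
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name

print(os.stat(path).st_size)  # 5 bytes of content

open(path, "w").close()       # truncate: the file still exists
print(os.stat(path).st_size)  # 0: a zero-byte file

os.remove(path)
```

The size reported here is the logical byte count; the blocks actually allocated on disk (visible as `st_blocks` on Unix-like systems) may be larger, echoing the block-tracking behavior of older operating systems described above.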
A text file may contain lines of text, corresponding to printed lines on a piece of paper. Alternatively, a file may contain an arbitrary binary image or an executable. The way information is grouped into a file is entirely up to how the file is designed. This has led to a plethora of more or less standardized file structures for all imaginable purposes, from the simplest to the most complex. Most computer files are used by computer programs which create, modify or delete the files for their own use on an as-needed basis. The programmers who create the programs decide what files are needed, how they are to be used, and their names. In some cases, computer pr