Bluecurve is a desktop theme for GNOME and KDE created by the Red Hat Artwork project. The main aims of Bluecurve were to create a consistent look throughout the Linux environment and to provide support for various Freedesktop.org desktop standards. It was used in Red Hat Linux from version 8.0 onward and in Fedora Core. The Bluecurve window borders and GTK+ theme have since been replaced by those from Clearlooks, and the Bluecurve icon set has been replaced as the default by Echo; however, the old Bluecurve themes are still installed by default and can be selected in the theme manager. There has been controversy surrounding the theme, particularly the alterations to KDE, which were sufficiently severe as to cause developer Bernhard Rosenkraenzer to quit Red Hat "mostly in mutual agreement — I don't want to work on crippling KDE, they don't want an employee who admits RHL 8.0's KDE is crippleware." Others criticized it for giving the same look to both desktops even though they differ in many ways. This approach was subsequently emulated by Mandrake Linux with their "Galaxy" theme, available for GNOME and KDE, and in Kubuntu 6.06 with the GTK-Qt theme engine.
Enterprising GUI artists have created themes that emulate the Bluecurve theme on other operating systems, including Windows. Users can replace their default Windows icons with icons that emulate Bluecurve using the IconPackager application; one such set can be downloaded at WinCustomize.

See also:
- Computer icon
- Crystal - LGPL icon set by Everaldo Coelho
- Nuvola - LGPL icon set by David Vignoni
- Oxygen Project - LGPL icon set for KDE
- Palette
- QtCurve
- Tango Desktop Project - developers of a public domain icon set
- Theme
- Fedora Artwork
- Waikato Linux Users Group wiki article
- Bluecurve icon pack for IconPackager
Western Europe is the region comprising the western part of Europe. Though the term Western Europe is used, there is no agreed-upon definition of the countries that it encompasses. Significant historical events that have shaped the concept of Western Europe include the rise of Rome, the adoption of Greek culture during the Roman Republic, the adoption of Christianity by Roman Emperors, the division of the Latin West and Greek East, the Fall of the Western Roman Empire, the reign of Charlemagne, the Viking invasions, the East–West Schism, the Black Death, the Renaissance, the Age of Discovery, the Protestant Reformation as well as the Counter-Reformation of the Catholic Church, the Age of Enlightenment, the French Revolution, the Industrial Revolution, the two world wars, the Cold War, the formation of the North Atlantic Treaty Organization and the expansion of the European Union. Prior to the Roman conquest, a large part of Western Europe had adopted the newly developed La Tène culture; as the Roman domain expanded, a cultural and linguistic division appeared between the Greek-speaking eastern provinces, which had formed the urbanized Hellenistic civilization, and the western territories, which in contrast adopted the Latin language.
This cultural and linguistic division was reinforced by the political east-west division of the Roman Empire. The Western Roman Empire and the Eastern Roman Empire controlled the two divergent regions between the 3rd and the 5th centuries, and the division between the two was enhanced during Late antiquity and the Middle Ages by a number of events. The Western Roman Empire collapsed, while the Eastern Roman Empire, known as the Greek or Byzantine Empire, survived and thrived for another 1000 years. The rise of the Carolingian Empire in the west and, in particular, the Great Schism between Eastern Orthodoxy and Roman Catholicism enhanced the cultural and religious distinctiveness between Eastern and Western Europe. After the conquest of the Byzantine Empire, center of the Eastern Orthodox Church, by the Muslim Ottoman Empire in the 15th century, and the gradual fragmentation of the Holy Roman Empire, the division between Roman Catholic and Protestant became more important in Europe than that with Eastern Orthodoxy.
In East Asia, Western Europe was historically known as taixi in China and taisei in Japan, which translates as the "Far West". The term Far West became synonymous with Western Europe in China during the Ming dynasty; the Italian Jesuit priest Matteo Ricci was one of the first writers in China to use the Far West as an Asian counterpart to the European concept of the Far East. In his writings, Ricci referred to himself as "Matteo of the Far West"; the term was still in use in the late 19th and early 20th centuries. Christianity is still the largest religion in Western Europe; according to a 2018 study by the Pew Research Center, 71.0% of the Western European population identified themselves as Christians. The East–West Schism, which has lasted since the 11th century, divided Christianity in Europe, and consequently the world, into Western Christianity and Eastern Christianity. With certain simplifications, Western Europe is thus Catholic or Protestant and uses the Latin alphabet, while Eastern Europe uses the Greek alphabet or Cyrillic script.
According to this definition, Western Europe is formed by countries with dominant Roman Catholic and Protestant churches, including countries which are now considered part of Central Europe: Austria, Croatia, the Czech Republic, Estonia, France, Hungary, Ireland, Latvia, Lithuania, Malta, Norway, Portugal, Slovenia, Sweden and the United Kingdom. Eastern Europe, meanwhile, is formed by countries with dominant Eastern Orthodox churches, such as Greece, Bulgaria, Romania and Ukraine. The schism is the break of communion and theology between what are now the Eastern and Western churches. This division dominated Europe for centuries, in opposition to the rather short-lived Cold War division of four decades. Since the Great Schism of 1054, Europe has been divided between Roman Catholic and Protestant churches in the west and the Eastern Orthodox Christian churches in the east. Due to this religious cleavage, Eastern Orthodox countries are associated with Eastern Europe. A cleavage of this sort is, however, problematic.
During the four decades of the Cold War, the definition of East and West was rather simplified by the existence of the Eastern Bloc. Historians and social scientists generally view the Cold War definition of Western and Eastern Europe as outdated. During the final stages of World War II, the future of Europe was decided between the Allies at the 1945 Yalta Conference by the British Prime Minister, Winston Churchill, the U.S. President, Franklin D. Roosevelt, and the Premier of the Soviet Union, Joseph Stalin. Post-war Europe would be divided into two major spheres: the Western Bloc, influenced by the United States, and the Eastern Bloc, influenced by the Soviet Union. With the onset of the Cold War, Europe was divided by the Iron Curtain; this term had been used during World War II by German Propaganda Minister Joseph Goebbels and Count Lutz Schwerin von Krosigk in the last days of the war.
Internationalization and localization
In computing, internationalization and localization are means of adapting computer software to different languages, regional peculiarities and technical requirements of a target locale. Internationalization is the process of designing a software application so that it can be adapted to various languages and regions without engineering changes. Localization is the process of adapting internationalized software for a specific region or language by translating text and adding locale-specific components. Localization uses the flexibility provided by internationalization. Owing to the length of the words, the terms are frequently abbreviated to the numeronyms i18n (internationalization) and L10n (localization). Some companies, like IBM and Sun Microsystems, use the term globalization, abbreviated g11n, for the combination of internationalization and localization; this combination is also known as "glocalization". Microsoft defines internationalization as a combination of world-readiness and localization. World-readiness is a developer task which enables a product to be used with multiple scripts and cultures and separates user interface resources into a localizable format.
Hewlett-Packard created a system called "National Language Support" or "Native Language Support" (NLS) for HP-UX to produce localizable software. According to Software without frontiers, the design aspects to consider when internationalizing a product are "data encoding and documentation, software construction, hardware device support, user interaction". Translation is the most time-consuming component of language localization; this may involve:
- For film and audio, translation of spoken words or music lyrics, using either dubbing or subtitles
- Text translation for printed materials and digital media
- Potentially altering images and logos containing text to contain translations or generic icons
- Accounting for different translation lengths and differences in character sizes, which can cause layouts that work well in one language to work poorly in others
- Consideration of differences in dialect, register or variety
- Writing conventions, such as:
  - Formatting of numbers
  - Date and time formats, including use of different calendars
Computer software can encounter differences above and beyond straightforward translation of words and phrases, because computer programs can generate content dynamically.
These differences may need to be taken into account by the internationalization process in preparation for translation. Many of these differences are so regular that a conversion between languages can be automated; the Common Locale Data Repository by Unicode provides a collection of such differences. Its data is used by major operating systems, including Microsoft Windows, macOS and Debian, and by major Internet companies or projects such as Google and the Wikimedia Foundation. Examples of such differences include:
- Different "scripts" in different writing systems use different characters - a different set of letters, logograms, or symbols. Modern systems use the Unicode standard to represent many different languages with a single character encoding.
- Writing direction is left-to-right in most European languages, right-to-left in Hebrew and Arabic, both in boustrophedon scripts, and optionally vertical in some Asian languages.
- Complex text layout, for languages where characters change shape depending on context
- Capitalization exists in some scripts and not in others
- Different languages and writing systems have different text sorting rules
- Different languages have different numeral systems, which might need to be supported if Western Arabic numerals are not used
- Different languages have different pluralization rules, which can complicate programs that dynamically display numerical content.
- Other grammar rules might vary, e.g. genitive case
- Different languages use different punctuation
- Keyboard shortcuts can only make use of buttons on the keyboard layout being localized for; if a shortcut corresponds to a word in a particular language, it may need to be changed.
- Different countries have different economic conventions, including variations in:
  - Paper sizes
  - Broadcast television systems and popular storage media
  - Telephone number formats
  - Postal address formats, postal codes, and choice of delivery services
  - Currency (ISO 4217 codes are used for internationalization)
  - System of measurement
  - Battery sizes
  - Voltage and current standards
In particular, the United States and Europe differ in most of these cases; other areas follow one of the two. Specific third-party services, such as online maps, weather reports, or payment service providers, might not be available worldwide from the same carriers, or at all. Time zones vary across the world, and this must be taken into account if a product originally interacted only with people in a single time zone.
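The pluralization point above is one place where naive string concatenation breaks. A small sketch using Python's stdlib gettext.ngettext, here with a null catalog so English's two-form rule applies:

```python
import gettext

t = gettext.NullTranslations()  # stand-in for a locale-specific catalog

def files_message(n):
    # ngettext selects a plural form by count; a real catalog can define
    # language-specific rules (Polish, for instance, has three forms).
    return t.ngettext("%d file", "%d files", n) % n

print(files_message(1))  # 1 file
print(files_message(3))  # 3 files
```

Keeping the full singular and plural strings in the catalog, rather than pasting an "s" onto a noun in code, is what lets each language apply its own rules.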
For internationalization, UTC is often used internally and then converted to a local time zone for display purposes.
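That practice can be sketched as follows; the UTC+9 offset is an illustrative stand-in for whatever zone the user's locale specifies:

```python
from datetime import datetime, timezone, timedelta

# Store and compare timestamps in UTC internally...
stored = datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc)

# ...and convert to the user's zone only at display time (UTC+9 here
# corresponds to, e.g., Tokyo, ignoring DST rules for simplicity).
local = stored.astimezone(timezone(timedelta(hours=9)))
print(local.isoformat())  # 2024-05-01T21:00:00+09:00
```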
A Linux distribution is an operating system made from a software collection, based upon the Linux kernel and a package management system. Linux users obtain their operating system by downloading one of the Linux distributions, which are available for a wide variety of systems ranging from embedded devices and personal computers to powerful supercomputers. A typical Linux distribution comprises a Linux kernel, GNU tools and libraries, additional software, documentation, a window system, a window manager, and a desktop environment. Most of the included software is free and open-source software made available both as compiled binaries and in source code form, allowing modifications to the original software. Linux distributions optionally include some proprietary software that may not be available in source code form, such as binary blobs required for some device drivers. A Linux distribution may be described as a particular assortment of application and utility software, packaged together with the Linux kernel in such a way that its capabilities meet the needs of many users.
The software is adapted to the distribution and packaged into software packages by the distribution's maintainers. The software packages are available online in so-called repositories, which are storage locations distributed around the world. Beside glue components, such as the distribution installers or the package management systems, there are only a few packages that are written from the ground up by the maintainers of a Linux distribution. Around six hundred Linux distributions exist, with close to five hundred of those in active development. Because of the huge availability of software, distributions have taken a wide variety of forms, including those suitable for use on desktops, laptops, mobile phones and tablets, as well as minimal environments for use in embedded systems. There are commercially backed distributions, such as Fedora, openSUSE and Ubuntu, and community-driven distributions, such as Debian, Slackware and Arch Linux. Most distributions come ready to use and pre-compiled for a specific instruction set, while some distributions are distributed in source code form and compiled locally during installation.
Linus Torvalds developed the Linux kernel and distributed its first version, 0.01, in 1991. Linux was distributed as source code only, and later as a pair of downloadable floppy disk images – one bootable and containing the Linux kernel itself, the other with a set of GNU utilities and tools for setting up a file system. Since the installation procedure was complicated in the face of growing amounts of available software, distributions sprang up to simplify this. Early distributions included:
- H. J. Lu's "Boot-root", the aforementioned disk image pair with the kernel and the absolute minimal tools to get started, in late 1991
- MCC Interim Linux, made available to the public for download in February 1992
- Softlanding Linux System (SLS), released in 1992, the most comprehensive distribution for a short time, including the X Window System
- Yggdrasil Linux/GNU/X, a commercial distribution first released in December 1992
The two oldest and still active distribution projects started in 1993. The SLS distribution was not well maintained, so in July 1993 a new distribution, called Slackware and based on SLS, was released by Patrick Volkerding.
Dissatisfied with SLS, Ian Murdock set out to create a free distribution by founding Debian, which had its first release in December 1993. Users were attracted to Linux distributions as alternatives to the DOS and Microsoft Windows operating systems on IBM PC compatible computers, Mac OS on the Apple Macintosh, and proprietary versions of Unix. Most early adopters were familiar with Unix from school; they embraced Linux distributions for their low cost and the availability of the source code for most or all of the software included. The distributions were originally a convenience, offering a free alternative to proprietary versions of Unix, but they later became the usual choice even for Unix or Linux experts. Linux has since become more popular in the server and embedded device markets than in the desktop market: for example, Linux is used on over 50% of web servers, whereas its desktop market share is about 3.7%. Many Linux distributions provide an installation system akin to that provided with other modern operating systems. On the other hand, some distributions, including Gentoo Linux, provide only the binaries of a basic kernel, compilation tools, and an installer.
Distributions are segmented into packages. Each package contains a specific application or service; examples of packages are a library for handling the PNG image format, a collection of fonts, or a web browser. The package is provided as compiled code, with installation and removal of packages handled by a package management system (PMS) rather than a simple file archiver. Each package intended for such a PMS contains meta-information such as a package description and "dependencies". The package management system can evaluate this meta-information to allow package searches, to perform an automatic upgrade to a newer version, to check that all dependencies of a package are fulfilled, and/or to fulfill them automatically.
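How a package manager can walk the "dependencies" meta-information is sketched below with a toy resolver; the package names and the flat dictionary are invented for illustration and do not reflect any real repository format:

```python
# Hypothetical package index: name -> list of dependency names.
packages = {
    "web-browser": ["libpng", "fonts"],
    "libpng": ["zlib"],
    "fonts": [],
    "zlib": [],
}

def resolve(name, seen=None):
    # Depth-first walk: a package's dependencies are ordered for
    # installation before the package itself, each appearing once.
    if seen is None:
        seen = []
    for dep in packages[name]:
        resolve(dep, seen)
    if name not in seen:
        seen.append(name)
    return seen

print(resolve("web-browser"))  # ['zlib', 'libpng', 'fonts', 'web-browser']
```

Real package managers additionally handle version constraints, conflicts, and dependency cycles, which this sketch omits.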
An ideogram or ideograph is a graphic symbol that represents an idea or concept, independent of any particular language, specific words or phrases. Some ideograms are comprehensible only by familiarity with prior convention. In proto-writing, used for inventories and the like, physical objects are represented by stylized or conventionalized pictures, or pictograms. For example, the pictorial Dongba symbols without Geba annotation cannot represent the Naxi language, but are used as a mnemonic for reciting oral literature; some systems use ideograms, symbols denoting abstract concepts. The term "ideogram" is used to describe symbols of writing systems such as Egyptian hieroglyphs, Sumerian cuneiform and Chinese characters. However, these symbols are logograms, representing words or morphemes of a particular language rather than objects or concepts. In these writing systems, a variety of strategies were employed in the design of logographic symbols. Pictographic symbols depict the object referred to by the word, such as an icon of a bull denoting the Semitic word ʾālep "ox".
Some words denoting abstract concepts may be represented iconically, but most other words are represented using the rebus principle, borrowing a symbol for a similarly-sounding word. Such systems also used selected symbols to represent the sounds of the language, for example the adaptation of the logogram for ʾālep "ox" as the letter aleph representing the initial sound of the word, a glottal stop. Many signs in hieroglyphic as well as in cuneiform writing could be used either logographically or phonetically. For example, the Akkadian sign AN could be an ideograph for "deity", an ideogram for the god Anum in particular, a logograph for the Akkadian stem il- "deity", a logograph for the Akkadian word šamu "sky", or a syllabogram for either the syllable an or il. Although Chinese characters are logograms, two of the smaller classes in the traditional classification are ideographic in origin: Simple ideographs are abstract symbols such as 上 shàng "up" and 下 xià "down" or numerals such as 三 sān "three".
Semantic compounds are semantic combinations of characters, such as 明 míng "bright", composed of 日 rì "sun" and 月 yuè "moon", or 休 xiū "rest", composed of 人 rén "person" and 木 mù "tree". In the light of the modern understanding of Old Chinese phonology, researchers now believe that most of the characters classified as semantic compounds are at least partly phonetic in nature. An example of ideograms is the collection of 50 signs developed in the 1970s by the American Institute of Graphic Arts at the request of the US Department of Transportation; the system was used to mark airports and became more widespread. Mathematical symbols are a type of ideogram. Inspired by inaccurate early descriptions of Chinese and Japanese characters as ideograms, many Western thinkers have sought to design universal written languages, in which symbols denote concepts rather than words. An early proposal was An Essay towards a Real Character, and a Philosophical Language by John Wilkins. A recent example is the system of Blissymbols, proposed by Charles K.
Bliss in 1949, which includes over 2,000 symbols.

External links:
- The Ideographic Myth, extract from DeFrancis' book
- American Heritage Dictionary definition
- Merriam-Webster OnLine definition
Bi-directional text is text containing both text directionalities, right-to-left and left-to-right. It generally involves text containing different types of alphabets, but may also refer to boustrophedon, in which the text direction changes in each row. Some writing systems of the world, including the Arabic and Hebrew scripts or derived systems such as the Persian and Yiddish scripts, are written in a form known as right-to-left, in which writing begins at the right-hand side of a page and concludes at the left-hand side. This is different from the left-to-right direction used by the dominant Latin script. When LTR text is mixed with RTL in the same paragraph, each type of text is written in its own direction, which is known as bi-directional text; this can get rather complex, and many computer programs fail to display bi-directional text correctly. For example, the Hebrew name Sarah is spelled sin, resh, heh, written from right to left. Note: some web browsers may display the Hebrew text in this article in the opposite direction. Bidirectional script support is the capability of a computer system to correctly display bi-directional text.
The term is often shortened to "BiDi" or "bidi". Early computer installations were designed only to support a single writing system, typically left-to-right scripts based on the Latin alphabet. Adding new character sets and character encodings enabled a number of other left-to-right scripts to be supported, but did not support right-to-left scripts such as Arabic or Hebrew, and mixing the two was not practical. Right-to-left scripts were introduced through encodings like ISO/IEC 8859-6 and ISO/IEC 8859-8, storing the letters in writing and reading order. It is possible to flip the left-to-right display order to a right-to-left display order, but doing this sacrifices the ability to correctly display left-to-right scripts. With bidirectional script support, it is possible to mix text from different scripts on the same page, regardless of writing direction. In particular, the Unicode standard provides foundations for complete BiDi support, with detailed rules as to how mixtures of left-to-right and right-to-left scripts are to be encoded and displayed.
The Unicode standard calls for characters to be ordered 'logically', i.e. in the sequence they are intended to be interpreted, as opposed to 'visually', the sequence in which they appear. This distinction is relevant for bidi support because at any bidi transition, the visual presentation ceases to match the 'logical' one. Thus, in order to offer bidi support, Unicode prescribes an algorithm for how to convert the logical sequence of characters into the correct visual presentation. For this purpose, the Unicode encoding standard divides all its characters into one of four types: 'strong', 'weak', 'neutral', and 'explicit formatting'. Strong characters are those with definite directionality. Examples of this type of character include most alphabetic characters, syllabic characters, Han ideographs, non-European or non-Arabic digits, and punctuation characters that are specific to only those scripts. Weak characters are those with vague directionality. Examples of this type of character include European digits, Eastern Arabic-Indic digits, arithmetic symbols, and currency symbols.
Unless a directional override is present, numbers are always encoded big-endian, with the numerals rendered LTR; the weak directionality only applies to the placement of the number in its entirety. Neutral characters have directionality indeterminable without context. Examples include paragraph separators and most other whitespace characters; punctuation symbols that are common to many scripts, such as the colon, the full stop, and the no-break space, also fall within this category. Explicit formatting characters, also referred to as "directional formatting characters", are special Unicode sequences that direct the Unicode algorithm to modify its default behavior; these characters are subdivided into "marks", "embeddings", "isolates", and "overrides". Their effects continue until the occurrence of a "pop" character. If a "weak" character is followed by another "weak" character, the algorithm will look at the first neighbouring "strong" character. Sometimes this leads to unintentional display errors; these errors are prevented with "pseudo-strong" characters.
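These classes surface in Unicode's per-character bidirectional category, which can be inspected directly; a small sketch using Python's stdlib unicodedata:

```python
import unicodedata

# Bidirectional categories: "L" is strong left-to-right, "R" and "AL" are
# strong right-to-left (Hebrew and Arabic letters), "EN" is a weak
# European number, and "WS" is neutral whitespace.
samples = {
    "A": "Latin letter",
    "\u05d0": "Hebrew alef",
    "\u0627": "Arabic alef",
    "7": "European digit",
    " ": "space",
}
for ch, desc in samples.items():
    print(f"{desc}: {unicodedata.bidirectional(ch)}")
```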
Such Unicode control characters are called marks. A mark is inserted into a location to make an enclosed weak character inherit its writing direction. For example, to display the U+2122 ™ TRADE MARK SIGN for an English brand name in an Arabic passage, an LRM mark is inserted after the trademark symbol if the symbol is not followed by LTR text. If the LRM mark is not added, the weak character ™ will be neighbored by a strong LTR character and a strong RTL character; hence, in an RTL context, it will be considered to be RTL and displayed in an incorrect order. The "embedding" directional formatting characters are the classical Unicode method of explicit formatting and, as of Unicode 6.3, are discouraged in favor of "isolates". An "embedding" signals that a piece of text is to be treated as directionally distinct; the text within the scope of the embedding formatting characters is not independent of the surrounding text, and characters within an embedding can affect the ordering of characters outside it. Unicode 6.3 recognized that directional embeddings have too strong an effect on their surroundings and are thus unnecessary.
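The LRM fix described above can be sketched by inserting U+200E after the weak character; the brand name and the Arabic word below are invented examples:

```python
# U+200E LEFT-TO-RIGHT MARK: an invisible strong-LTR "pseudo-strong" mark.
LRM = "\u200e"

brand = "Acme\u2122"                       # hypothetical brand ending in ™
arabic = "\u0645\u0631\u062d\u0628\u0627"  # an Arabic word, "marhaba"

# Without the mark, ™ sits between a strong RTL character and more RTL
# text and resolves as RTL; the mark gives it a strong LTR neighbour so
# it stays attached to the brand name in display order.
sentence = arabic + " " + brand + LRM + " " + arabic
print(sentence.count(LRM))  # 1
```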