In biology and genetics, a mutant is an organism or a new genetic character arising or resulting from an instance of mutation, an alteration of the DNA sequence of the genome or chromosome of an organism. The term mutant is also applied to a virus whose genome is RNA rather than DNA when an alteration occurs in its nucleotide sequence. In multicellular eukaryotes, a DNA sequence may be altered in an individual somatic cell, giving rise to a mutant somatic cell lineage, as happens in cancer progression. In eukaryotes, alteration of a mitochondrial or plastid DNA sequence may also give rise to a mutant lineage, inherited separately from mutant genotypes in the nuclear genome. The natural occurrence of genetic mutations is integral to the process of evolution, and the study of mutants is an integral part of biology. Mutants arise from mutations occurring in pre-existing genomes as a result of errors of DNA replication or errors of DNA repair. Errors of replication often involve translesion synthesis by a DNA polymerase when it encounters and bypasses a damaged base in the template strand.
DNA damage is an abnormal chemical structure in DNA, such as a strand break or an oxidized base, whereas a mutation is a change in the sequence of standard base pairs. Errors of repair also occur; the DNA repair process microhomology-mediated end joining, for example, is error-prone. Although not all mutations have a noticeable phenotypic effect, in common usage the word "mutant" is often a pejorative term, applied only to genetically or phenotypically noticeable mutations. Previously, people used the word "sport" to refer to abnormal specimens; the scientific usage is broader. Mutants should not be confused with organisms born with developmental abnormalities, which are caused by errors during morphogenesis. In a developmental abnormality, the DNA of the organism is unchanged and the abnormality cannot be passed on to progeny. Conjoined twins are the result of developmental abnormalities. Chemicals that cause developmental abnormalities are called teratogens; chemicals that induce mutations are called mutagens. Most mutagens are also considered to be carcinogens.
Mutations are distinctly different from epigenetic alterations, although they share some common features. Both arise as a chromosomal alteration that can be replicated and passed on to subsequent cell generations, and both, when occurring within a gene, may silence expression of the gene. Whereas mutant cell lineages arise from a change in the sequence of standard bases, epigenetically altered cell lineages retain the sequence of standard bases but have genes with changed levels of expression that can be passed down to subsequent cell generations. Epigenetic alterations include methylation of CpG islands of a gene promoter as well as specific chromatin histone modifications. Faulty repair of chromosomes at sites of DNA damage can give rise to mutant cell lineages, epigenetically altered cell lineages, or both.
Nuclear warfare is a military conflict or political strategy in which nuclear weaponry is used to inflict damage on the enemy. Nuclear weapons are weapons of mass destruction. A major nuclear exchange would have long-term effects, primarily from the fallout released, and could lead to a "nuclear winter" lasting for decades, centuries, or even millennia after the initial attack. Some analysts dismiss the nuclear winter hypothesis and calculate that, even with nuclear weapon stockpiles at Cold War highs, although there would be billions of casualties, billions more rural people would nevertheless survive. However, others have argued that secondary effects of a nuclear holocaust, such as nuclear famine and societal collapse, would cause every human on Earth to starve to death. So far, two nuclear weapons have been used in the course of warfare, both by the United States near the end of World War II. On August 6, 1945, a uranium gun-type device was detonated over the Japanese city of Hiroshima. Three days later, on August 9, a plutonium implosion-type device was detonated over the Japanese city of Nagasaki.
These two bombings resulted in the deaths of approximately 120,000 people. After World War II, nuclear weapons were also developed by the Soviet Union, the United Kingdom, and the People's Republic of China, which contributed to the state of conflict and extreme tension that became known as the Cold War. In 1974 India, and in 1998 Pakistan, two countries that were hostile toward each other, developed nuclear weapons. Israel and North Korea are also thought to have developed stocks of nuclear weapons, though it is not known how many; the Israeli government has never confirmed or denied having nuclear weapons, although it is known to have constructed the reactor and reprocessing plant necessary for building them. South Africa manufactured several complete nuclear weapons in the 1980s, but subsequently became the first country to voluntarily destroy its domestically made weapons stocks and abandon further production. Nuclear weapons have been detonated on over 2,000 occasions for testing and demonstration purposes. After the collapse of the Soviet Union in 1991 and the resultant end of the Cold War, the threat of a major nuclear war between the two nuclear superpowers was thought to have declined.
Since then, concern over nuclear weapons has shifted to the prevention of localized nuclear conflicts resulting from nuclear proliferation, and to the threat of nuclear terrorism. The possibility of using nuclear weapons in war is usually divided into two subgroups, each with different effects and fought with different types of nuclear armaments. The first, a limited nuclear war, refers to a small-scale use of nuclear weapons by two belligerents. A "limited nuclear war" could include targeting military facilities—either as an attempt to pre-emptively cripple the enemy's ability to attack as a defensive measure, or as a prelude to an invasion by conventional forces as an offensive measure; the term could apply to any small-scale use of nuclear weapons that may involve military or civilian targets. The second, a full-scale nuclear war, could consist of large numbers of nuclear weapons used in an attack aimed at an entire country, including military and civilian targets; such an attack would certainly destroy the entire economic and military infrastructure of the target nation and would have a devastating effect on Earth's biosphere.
Some Cold War strategists such as Henry Kissinger argued that a limited nuclear war could be possible between two armed superpowers. Some predict that a limited war could "escalate" into a full-scale nuclear war. Others have called limited nuclear war "global nuclear holocaust in slow motion", arguing that—once such a war took place—others would be sure to follow over a period of decades, rendering the planet uninhabitable in the same way that a "full-scale nuclear war" between superpowers would, only taking a much longer path to the same result. Even the most optimistic predictions of the effects of a major nuclear exchange foresee the death of many millions of victims within a short period of time. More pessimistic predictions argue that a full-scale nuclear war could bring about the extinction of the human race, or at least its near extinction, with only a small number of survivors and a reduced quality of life and life expectancy for centuries afterward. However, such predictions, which assume total war with nuclear arsenals at Cold War highs, have not been without criticism.
Such a horrific catastrophe as global nuclear warfare would certainly cause permanent damage to most complex life on the planet, its ecosystems, and the global climate. If predictions about the production of a nuclear winter are accurate, it would change the balance of global power, with countries such as Australia, New Zealand, China, and Brazil predicted to become world superpowers had the Cold War led to a large-scale nuclear attack. A study presented at the annual meeting of the American Geophysical Union in December 2006 asserted that even a small-scale regional nuclear war could produce as many direct fatalities as all of World War II and disrupt the global climate for a decade or more. In a regional nuclear conflict scenario in w
A low-budget film or low-budget movie is a motion picture shot with little to no funding from a major film studio or private investor. Many independent films are made on low budgets, but films made on the mainstream circuit with inexperienced or unknown filmmakers can also have low budgets. Many young or first-time filmmakers shoot low-budget films to prove their talent before doing bigger productions. Many low-budget films that do not gain some form of attention or acclaim are never released in theatres and are sent straight to retail because of their lack of marketability, story, or premise. There is no precise number that defines a low-budget production; it is relative to both genre and country. What might be a low-budget film in one country may be a big-budget production in another. Modern-day young filmmakers rely on film festivals for pre-promotion, using them to gain acclaim and attention for their films, which can lead to a limited release in theatres. Films that acquire a cult following may be given a wide release.
Low-budget films can be amateur productions, shot using either professional or consumer equipment. Some genres are more conducive to low-budget filmmaking than others. Horror films are a popular genre for low-budget directorial debuts. Jeremy Gardner, director of The Battery, says that horror fans are more attracted to how the films affect them than to seeing movie stars; this allows horror films to focus more on provoking a reaction than on expensive casting choices. Thriller films are another popular choice for low-budget filmmaking, as they focus on narrative. Science fiction films, which were once the domain of B movies, typically require a big budget to accommodate their special effects, but low-cost do-it-yourself computer-generated imagery can make them affordable when they focus on story and characterization. Plot devices like shooting as found footage can lower production costs, and scripts that rely on extended dialogue, such as Reservoir Dogs or Sex, Lies, and Videotape, can entertain audiences without many sets. The money flow in filmmaking is a unique system because of the uncertainty of demand.
The makers of a film cannot know in advance how much it will earn. They may predict a film will do well and pay back the cost of production, but only get a portion back. Or the opposite may happen, where a project that few think will go far brings in more profit than imaginable. A big gambling variable involved is the use of stars. Stars are brought onto a project to gain the film publicity and fame, and this process can be profitable. Well-known actors may join a low-budget film for a portion of the gross. One of the most successful low-budget films was 1999's The Blair Witch Project: it had a budget of around US$60,000 but grossed $249 million worldwide, and it spawned books, a trilogy of video games, and a less-popular sequel. An even more successful low-budget film was the 1972 film Deep Throat, which cost only $22,500 to produce, yet was rumored to have grossed over $600 million, though this figure is disputed. Another early example of a successful low-budget film was the 1975 Bollywood Dacoit Western film Sholay, which cost ₹20 million to produce and grossed ₹3 billion, making it one of the highest-grossing films of all time in Indian cinema.
Other examples of successful low-budget Asian films include Enter the Dragon, starring Bruce Lee, which had a budget of $850,000 and grossed $90 million worldwide. Wayne Wang's film Chan Is Missing, set on the streets of San Francisco's Chinatown, was made for $20,000 in 1982. San Francisco Chronicle columnist Herb Caen wrote that the budget would not have paid for the shoelaces in the film Annie. Rocky was shot on a budget of $1 million and grossed $225 million worldwide, making Sylvester Stallone a star. Halloween grossed $70 million worldwide. Napoleon Dynamite cost less than $400,000 to make but its gross revenue was $46 million. Divisions of major film studios that specialize in such films, such as Fox Searchlight Pictures and New Line Cinema, have made the distribution of low-budget films competitive. The UK film Monsters is a recent successful example of bringing what was once considered the exclusive preserve of the big studios—the expensive, special effects blockbuster—to independent, low-budget cinema.
The film's budget was reported to be $500,000, but it grossed $4,188,738 at the box office. A considerable number of low- and modest-budget films have been forgotten by their makers and have fallen into the public domain; this has been true of low-budget films made in the United States from 1923 to 1978. Examples include a number of films made by Roger Corman. Some low-budget films have failed miserably at the box office and been forgotten, only to increase in popularity decades later. A number of cheaply made movies have attained cult-film status after being considered some of the worst features made for many years; the most famous examples of this latter-day popularity of low-budget box-office failures include Plan 9 from Outer Space and Manos: The Hands of Fate. Additionally, some low-cost films that had little success upon their initial release have come to be considered classics. The Last Man on Earth was the first adaptation of the novel I Am Legend. Due to budgetary constraints, the vampires in the film were zombie-like creatures instead of the fast and agile monsters portrayed in the novel.
A film, also called a movie, motion picture, moving picture, or photoplay, is a series of still images that, when shown on a screen, create the illusion of moving images. This optical illusion causes the audience to perceive continuous motion between separate objects viewed in rapid succession. The process of filmmaking is both an art and an industry. A film is created by photographing actual scenes with a motion-picture camera, by photographing drawings or miniature models using traditional animation techniques, by means of CGI and computer animation, or by a combination of some or all of these techniques and other visual effects. The word "cinema", short for cinematography, is used to refer to filmmaking and the film industry, as well as to the art of filmmaking itself. The contemporary definition of cinema is the art of simulating experiences to communicate ideas, perceptions, beauty or atmosphere by the means of recorded or programmed moving images along with other sensory stimulations. Traditionally, films were recorded onto plastic film through a photochemical process and then shown through a movie projector onto a large screen.
Contemporary films are now often fully digital through the entire process of production and exhibition, while films recorded in a photochemical form traditionally included an analogous optical soundtrack. Films are cultural artifacts created by specific cultures, and they reflect those cultures. Film is considered to be an important art form, a source of popular entertainment, and a powerful medium for educating—or indoctrinating—citizens. The visual basis of film gives it a universal power of communication, and some films have become popular worldwide attractions through the use of dubbing or subtitles to translate the dialog into other languages. The individual images that make up a film are called frames. In the projection of traditional celluloid films, a rotating shutter causes intervals of darkness as each frame, in turn, is moved into position to be projected, but the viewer does not notice the interruptions because of an effect known as persistence of vision, whereby the eye retains a visual image for a fraction of a second after its source disappears.
The perception of motion is due to a psychological effect called the phi phenomenon. The name "film" originates from the fact that photographic film has been the medium for recording and displaying motion pictures. Many other terms exist for an individual motion picture, including picture, picture show, moving picture, and flick; the most common term in the United States is movie. Common terms for the field in general include the big screen, the silver screen, the movies, and cinema. In early years, the word sheet was sometimes used instead of screen. Preceding film in origin by thousands of years, early plays and dances had elements common to film: scripts, costumes, direction, audiences, and scores. Much of the terminology used in film theory and criticism, such as mise en scène, applies to them as well. Owing to the lack of any technology for doing so, however, the moving images and sounds of such performances could not be recorded for replaying as with film. The magic lantern, created by Christiaan Huygens in the 1650s, could be used to project animation, achieved by various types of mechanical slides.
Two glass slides, one with the stationary part of the picture and the other with the part that was to move, would be placed one on top of the other and projected together; the moving slide would then be hand-operated, either directly or by means of a lever or other mechanism. Chromotrope slides, which produced eye-dazzling displays of continuously cycling abstract geometrical patterns and colors, were operated by means of a small crank and pulley wheel that rotated a glass disc. In the mid-19th century, inventions such as Joseph Plateau's phenakistoscope and the zoetrope demonstrated that a designed sequence of drawings, showing phases of the changing appearance of objects in motion, would appear to show the objects moving if they were displayed one after the other at a sufficiently rapid rate. These devices relied on the phenomenon of persistence of vision to make the display appear continuous even though the observer's view was blocked as each drawing rotated into the location where its predecessor had just been glimpsed.
Each sequence was limited to a small number of drawings, usually twelve, so it could only show endlessly repeating cyclical motions. By the late 1880s, the last major device of this type, the praxinoscope, had been elaborated into a form that employed a long coiled band containing hundreds of images painted on glass and used the elements of a magic lantern to project them onto a screen. The use of sequences of photographs in such devices was initially limited to a few experiments with subjects photographed in a series of poses, because the available emulsions were not sensitive enough to allow the short exposures needed to photograph subjects that were moving. Sensitivity was gradually improved and, in the late 1870s, Eadweard Muybridge created the first animated image sequences photographed in real time. A row of cameras was used, each, in turn, capturing one image on a photographic glass plate, so the total number of images in each sequence was limited by the number of cameras, about two dozen at most. Muybridge used his system to analyze the movements of a wide variety of animal and human subjects.
A B movie or B film is a low-budget commercial motion picture, as distinct from an arthouse film. In its original usage, during the Golden Age of Hollywood, the term more precisely identified films intended for distribution as the less-publicized bottom half of a double feature. Although the U.S. production of movies intended as second features ceased by the end of the 1950s, the term B movie continues to be used in its broader sense to this day. In its post-Golden Age usage, there is ambiguity on both sides of the definition: on the one hand, the primary interest of many inexpensive exploitation films is prurient; on the other, many B movies display a high degree of craft and aesthetic ingenuity. In either usage, most B movies represent a particular genre—the Western was a Golden Age B movie staple, while low-budget science-fiction and horror films became more popular in the 1950s. Early B movies were often part of series in which the star played the same character. Always shorter than the top-billed films they were paired with, many had running times of 70 minutes or less. The term connoted a general perception that B movies were inferior to the more lavishly budgeted headliners.
Latter-day B movies still sometimes inspire multiple sequels. As the average running time of top-of-the-line films increased, so did that of B pictures. In its current usage, the term has somewhat contradictory connotations: it may signal an opinion that a certain movie is either a genre film with minimal artistic ambitions or a lively, energetic film uninhibited by the constraints imposed on more expensive projects and unburdened by the conventions of putatively "serious" independent film. The term is now also used loosely to refer to some higher-budgeted, mainstream films with exploitation-style content in genres traditionally associated with the B movie. From their beginnings to the present day, B movies have provided opportunities both for those coming up in the profession and for others whose careers are waning. Celebrated filmmakers such as Anthony Mann and Jonathan Demme learned their craft in B movies; they are where actors such as John Wayne and Jack Nicholson first became established, and they have provided work for former A-movie actors, such as Vincent Price and Karen Black.
Some actors, such as Bela Lugosi, Eddie Constantine, Bruce Campbell and Pam Grier, worked in B movies for most of their careers. The term B actor is sometimes used to refer to a performer who finds work mainly or exclusively in B pictures. In 1927–28, at the end of the silent era, the production cost of an average feature from a major Hollywood studio ranged from $190,000 at Fox to $275,000 at Metro-Goldwyn-Mayer; that average reflected both "specials" that might cost as much as $1 million and films made for around $50,000. These cheaper films allowed the studios to derive maximum value from facilities and contracted staff in between a studio's more important productions, while also breaking in new personnel. Studios in the minor leagues of the industry, such as Columbia Pictures and Film Booking Offices of America, focused on those sorts of cheap productions. Their movies, with short running times, targeted theaters that had to economize on rental and operating costs, particularly small-town and urban neighborhood venues, or "nabes".
Smaller production houses, known as Poverty Row studios, made films whose costs might run as low as $3,000, seeking a profit through whatever bookings they could pick up in the gaps left by the larger concerns. With the widespread arrival of sound film in American theaters in 1929, many independent exhibitors began dropping the then-dominant presentation model, which involved live acts and a broad variety of shorts before a single featured film. A new programming scheme developed that would soon become standard practice: a newsreel, a short and/or serial, a cartoon, followed by a double feature; the second feature, which screened before the main event, cost the exhibitor less per minute than the equivalent running time in shorts. The majors' "clearance" rules favoring their affiliated theaters prevented the independents' timely access to top-quality films; the additional movie gave the program "balance"—the practice of pairing different sorts of features suggested to potential customers that they could count on something of interest no matter what was on the bill.
The low-budget picture of the 1920s thus evolved into the second feature, the B movie, of Hollywood's Golden Age. The major studios, at first resistant to the double feature, soon adapted. All established B units to provide films for the expanding second-feature market. Block booking became standard practice: to get access to a studio's attractive A pictures, many theaters were obliged to rent the company's entire output for a season. With the B films rented at a flat fee, rates could be set guaranteeing the profitability of every B movie; the parallel practice of blind bidding freed the majors from worrying about their Bs' quality—even when booking in less than seasonal blocks, exhibitors had to buy most pictures sight unseen. The five largest studios—Metro-Goldwyn-Mayer, Paramount Pictures, Fox Film Corporation, Warner Bros. and RKO Radio Pictures—also belonged to companies with sizable theater chains, further securing the bottom line. Poverty Row studios, from modest outfits like Mascot Pictures, Tiffany Pictures, and Sono Art-World Wide Pictures down to shoestring operations, made B movies, ot
In the context of present-day celebrity culture, an Internet celebrity, cyberstar, online celebrity, micro-celebrity, Internet personality, or influencer is someone who has become famous by means of the Internet. The advent of social media has helped people increase their outreach to a global audience; the Internet allows the masses to wrest control of fame from traditional media, creating micro-celebrities with the click of a mouse. Micro-celebrity is the state of being famous to a niche group of people on a social media platform. Achieving micro-celebrity status involves the use of a self-presentation technique in which the subject views himself or herself as a public persona to be consumed by others. Persons who achieve this status use self-presentation to appeal to followers. Micro-celebrities are targeted by companies for advertising products to their fans and followers. Wanghong, or "internet fame" in Mandarin, is the Chinese rendition of internet stardom; the term is used to describe the Chinese digital economy based on influencer marketing on social media.
Wanghong has been predominantly used to generate profits via retail or eCommerce by attracting the attention of celebrities' followers. According to CBN Data, a commercial data company affiliated with Alibaba, the Internet celebrity economy was set to be worth 58 billion yuan in 2016, more than China's total cinema box office revenue in 2015. There are two main business models in the Wanghong economy: social media advertising and online retailing. In the online retailing business model, eCommerce-based Wanghong involves the use of social media platforms to sell self-branded products to potential buyers among followers via Chinese customer-to-customer (C2C) websites, such as Taobao. Celebrities work as their own shops' models by posting pictures or videos of themselves wearing the clothes or accessories they sell, or by giving distinctive makeup or fashion tips. The celebrities serve as key opinion leaders for their followers, who either aspire to be like them or look up to them. Zhang Dayi, one of China's best-known wanghong, with 4.9 million Sina Weibo followers, has an online shop on Taobao earning 300 million yuan per year.
This is comparable to the $21 million made by a top Chinese actress. In social media advertising, internet celebrities can be paid to advertise products; when celebrities have garnered sufficient attention and followership, advertising companies approach them to help advertise products, which can then reach a large user base. Censorship in Chinese media has created an entire social media ecosystem that has become wildly successful in its own way. For every social media platform in the Western world, there is a Chinese version of it, and the Chinese version can be successful. In China, the social media platforms used are different from those used in the West, but the results are the same: the platforms generate revenue. The greatest difference between Chinese Wanghong celebrities and their Western counterparts is that the profits generated by Chinese celebrities can be immense. Unlike YouTube, which takes 45% of the commission on ads, one of the biggest social media platforms in China is not involved in advertising, which allows internet celebrities to be more independent.
Monthly incomes can exceed 10 million RMB for those at the top. Millions of people write online weblogs. In many cases these contributions do not make the writers notable on a large scale, or only to people with the same specialist interest, but if the author has or develops a distinctive personality, the author may rise to fame derived from this as much as from the content of the blog. In some cases, people might rise to fame through a single video that goes viral; the Internet allows videos, news articles, and jokes to spread quickly. Depending on the reach of the spread, the content may come to be considered an "Internet meme", and any of the people associated with it may gain exposure for posting intelligent content. For example, Zach Anner, an Austin, Texas-based comedian, gained worldwide attention after submitting a video to Oprah Winfrey's "Search for the Next TV Star" competition. There is substantial searching online for such people. Internet celebrities have become a popular phenomenon in China, with the likes of Sister Furong, who received worldwide notoriety and fame for her unashamed efforts at self-promotion via Internet postings.
The concept of web celebrity ties into Andy Warhol's quip about 15 minutes of fame. A more recent adaptation of Warhol's quip, prompted by the rise of online social networking and similar online phenomena, is the claim that "In the future, everyone will be famous to fifteen people" or, in some renditions, "On the Web, everyone will be famous to fifteen people". This quote, though attributed to David Weinberger, was said to have originated with the Scottish artist Momus. Social media personalities function as lifestyle gurus who present a particular lifestyle or attitude to their spectators. In this role they may be crucial influencers and multipliers for trends in the fashion industry, variously becoming popular as fashion bloggers or fashion designers. Meetups are a way for Internet celebrities to meet and interact with fans. In some cases an Internet celebrity has naively invited fans to meet him or her at a certain place and time without proper organization, attracting crowds of fans and causing disorderly and unsafe situations.
Tanacon is an example of an organization involving a group of internet celebrities who were set to meet paying fans but did not follow through. Because of the disorderly setup, the meetup resulted in chaos. Alternatively it can be
Harem, known as zenana in the Indian subcontinent, properly refers to domestic spaces that are reserved for the women of the house in a Muslim family. This private space has been traditionally understood as serving the purposes of maintaining the modesty and protection of women. A harem may house a man's wife — or wives and concubines, as in royal harems of the past — their pre-pubescent male children, unmarried daughters, female domestic workers, and other unmarried female relatives. In former times some harems were guarded by eunuchs. The structure of the harem and the extent of monogamy or polygamy have varied depending on the family's personalities, socio-economic status, and local customs. Similar institutions have been common in other Mediterranean and Middle Eastern civilizations among royal and upper-class families, and the term is sometimes used in other contexts. Although the institution has experienced a sharp decline in the modern era due to a rise in education and economic opportunities for women, as well as Western influences, the seclusion of women is still practiced in some parts of the world, such as rural Afghanistan and conservative states of the Gulf region.
In the West, Orientalist imaginary conceptions of the harem as a hidden world of sexual subjugation, where numerous women lounged in suggestive poses, have influenced many paintings, stage productions and literary works. Some earlier European Renaissance paintings dating to the 16th century portray the women of the Ottoman harem as individuals of status and political significance. In many periods of Islamic history, women in the harem exercised various degrees of political power, such as during the Sultanate of Women in the Ottoman Empire. The word harem has been recorded in the English language since the early 17th century. It comes from the Arabic ḥarīm, which can mean "a sacred inviolable place", "harem" or "female members of the family". In English the term harem can also mean "the wives of a polygamous man." The triliteral Ḥ-R-M appears in other terms related to the notion of interdiction, such as haram, ihram and al-Ḥaram al-Šarīf. In Turkish of the Ottoman era, the harem, i.e. the part of the house reserved for women, was called haremlık, while the space open for men was known as selamlık.
The practice of female seclusion is not exclusive to Islam, but the English word harem denotes the domestic space reserved for women in Muslim households. Some scholars have used the term to refer to polygynous royal households throughout history. Leila Ahmed describes the ideal of seclusion as "a man's right to keep his women concealed—invisible to other men." Ahmed identifies the practice of seclusion as a social ideal and one of the major factors that shaped the lives of women in the Mediterranean Middle East. For example, contemporary sources from the Byzantine Empire describe the social mores that governed women's lives. Women were not supposed to be seen in public; they were guarded by eunuchs and could only leave the home "veiled and suitably chaperoned." Some of these customs were borrowed from the Persians, but Greek society also influenced the development of patriarchal tradition. The ideal of seclusion was not fully realized as social reality; this was in part because working-class women held jobs that required interaction with men.
In the Byzantine Empire, the ideal of gender segregation created economic opportunities for women as midwives, bath attendants, and artisans, since it was considered inappropriate for men to attend to women's needs. At times women engaged in other commercial activities as well. Historical records show that the women of 14th-century Mamluk Cairo visited public events alongside men, despite the objections of religious scholars. The practice of gender segregation in Islam was influenced by an interplay of religion and politics. Female seclusion has historically signaled social and economic prestige. The norms of female seclusion spread beyond the elites, but the practice remained characteristic of the upper and middle classes, for whom the financial ability to allow one's wife to remain at home was a mark of high status. In some regions, such as the Arabian peninsula, seclusion of women was practiced by poor families at the cost of great hardship, but it was generally economically unrealistic for the lower classes. Where historical evidence is available, it indicates that the harem was much more likely to be monogamous.
For example, in late Ottoman Istanbul, only 2.29 percent of married men were polygynous, with the average number of wives being 2.08. In some regions, like Sub-Saharan Africa and Southeast Asia, the prevalence of women in agricultural work leads to wider practice of polygyny but makes seclusion impractical. In contrast, in Eurasian and North African rural communities that rely on male-dominated plough farming, seclusion is economically possible but polygyny is undesirable. This indicates that the fundamental characteristic of the harem is seclusion of women rather than polygyny. The idea of the harem or seclusion of women did not originate with Islam; the practice of secluding women was common to many Ancient Near East communities where polygamy was permitted. In pre-Islamic Assyria and Egypt, most royal courts had a harem, where the ruler's wives and concubines lived with female attendants and eunuchs. Encyclopædia Iranica uses the term harem to describe the practices of the ancient Near East. In Assyria, rules of harem etiquette were stipulated by