European Investment Bank
The European Investment Bank (EIB) is the European Union's nonprofit long-term lending institution, established in 1958 under the Treaty of Rome. As a "policy-driven bank" whose shareholders are the member states of the EU, the EIB uses its financing operations to further European integration and social cohesion. It should not be confused with the European Central Bank, based in Frankfurt, or with the European Bank for Reconstruction and Development, based in London. The EIB is a publicly owned international financial institution; because its shareholders are the EU member states, the member states set the bank's broad policy goals and oversee its two independent decision-making bodies, the board of governors and the board of directors. It is the world's largest international public lending institution. The EIB was founded in Brussels in 1958, when the Treaty of Rome came into force, and relocated to Luxembourg, its current headquarters, in 1968. By 1999, it had more than 1,000 staff members, a figure that had nearly doubled by 2012.
The EIB Group was formed in 2000, comprising the EIB and the European Investment Fund (EIF), the EU's venture capital arm, which provides finance and guarantees for small and medium-sized enterprises. The EIB is the EIF's majority shareholder, with 62% of the shares. In 2012, the EIB Institute was created to promote "European initiatives for the common good" in EU member states, candidate and potential candidate countries, and EFTA nations. The total subscribed capital of the bank was EUR 232 billion in 2012. The capital of the EIB was doubled between 2007 and 2009 in response to the financial crisis, and the EU heads of government agreed in June 2012 to increase paid-in capital by a further EUR 10 billion, with implementation expected in early 2013. For the fiscal year 2011, the EIB lent EUR 61 billion in various loan products, bringing total outstanding loans to EUR 395 billion. Nearly 90% of these loans were within EU member states, with the remainder dispersed among around 150 "partner countries". The bank funds itself by raising equivalent amounts on the capital markets, making use of its AAA credit rating.
As the "Bank of the European Union", the EIB's mission is to make a difference to the future of Europe and its partners by supporting sound investments which further EU policy goals. Although about 90 percent of projects financed by the EIB are based in EU member countries, the bank does fund projects in about 150 other countries—non-EU Southeastern European countries, Mediterranean partner countries, ACP countries and Latin American countries, the members of the Eastern Partnership and Russia. According to the EIB, it works in these countries to implement the financial pillar of the union's external cooperation and development policies by encouraging private sector development, infrastructure development, security of energy supply and environmental sustainability. In the wake of the 2014 Russian military intervention in Ukraine, the Council of the European Union instructed the EIB to suspend the signature of new financing operations in Russia. Operating strategy: To finance viable capital projects which further EU objectives To borrow on the capital markets to finance these projectsLending strategy within the EU Within the EU the EIB has six priority objectives: Cohesion and convergence Support for small and medium-sized enterprises Environmental sustainability Knowledge economy Development of Trans-European Networks of transport and energy Sustainable and secure energy supplyLending strategy outside the EU Outside the EU the EIB's priority objectives for lending activity are: Private sector development Financial sector development Infrastructure development Security of energy supply Environmental sustainability EU presenceWhen making loans outside the EU, the bank has lending mandates based on EU external cooperation and development policies, which differ from region to region.
The regional mandates are:
- Pre-Accession: Candidate and Potential Candidate countries in the Enlargement region
- European Neighbourhood: Mediterranean Neighbourhood / Russia and Eastern Neighbours
- Development: Africa and Caribbean / Republic of South Africa
- Economic Cooperation: Asia and Latin America

Within pre-accession countries, activities support both the EU priority lending objectives and the objectives of the external mandates. The bank's published policies include its Transport Policy, Energy Policy, Transparency Policy, Climate Strategy, Governance at the EIB, Complaints Mechanism Policy, Anti-Fraud and Anti-Corruption Policy, Integrity Policy and Compliance Charter, Statement on Environmental and Social Principles and Standards, EIB Whistleblowing Policy, and EIB Policy towards weakly regulated, non-transparent and uncooperative jurisdictions. The European Investment Bank is also represented in the Standard Committee of SuRe® – The Standard for Sustainable and Resilient Infrastructure, a global voluntary standard developed by the Swiss Global Infrastructure Basel Foundation and the French bank Natixis, which integrates key criteria of sustainability and resilience into infrastructure development and upgrades.
The EIB is governed by the:
- Board of Governors – the finance ministers of the member states. Their mandate is to authorise EIB activities outside the Union, form credit policy guidelines and approve the annual accounts.
- Board of Directors – twenty-eight members. There are eighteen alt
Greece national football team
The Greece national football team represents Greece in association football and is controlled by the Hellenic Football Federation, the governing body for football in Greece. Greece's main home grounds are located in the capital city, Athens, at the Olympic Stadium in Maroussi, and in the port of Piraeus at the Karaiskakis Stadium. Greece is one of only ten national teams to have been crowned UEFA European Champions. At UEFA Euro 1980 Greece made their first appearance in a major tournament, and although they did not make it through the group stage, their qualification for the eight-team tournament placed them among the top eight European football nations that year. Greece had to wait until 1994 to experience their first FIFA World Cup participation; after an undefeated qualifying run they produced a poor performance in the final tournament, losing all three group matches without scoring. UEFA Euro 2004 marked a high point in Greece's football history, when they were crowned European champions against all the odds, in only their second participation in the tournament.
The Greeks, dismissed as rank outsiders before the tournament, defeated some of the favourites in the competition, including hosts Portugal and defending European champions France, beating Portugal both in the opening game of the tournament and again in the final. Their triumph qualified them for the 2005 FIFA Confederations Cup. In the decade after the 2004 victory, Greece qualified for the final tournaments of all but one of the major competitions they entered, reaching the quarter-finals at UEFA Euro 2012 and the round of 16 at the 2014 FIFA World Cup. Moreover, they occupied a place in the top 20 of the FIFA World Rankings for all but four months during that period, and reached an all-time high of eighth in the world from April to June 2008, as well as in October 2011.

The first appearance of a Greek national football team was at the 1906 Intercalated Games in Athens. The Greek team participated in the Inter-Allied Games in Paris following the end of World War I, and in the 1920 Summer Olympics in Antwerp.
A notable figure during these years was Giorgos Kalafatis, a player and manager of the team. During the following decades, the Greek team did not manage any success, despite the passion of the Greek people for football; the country's economic and social problems, before and after World War II, did not allow for successful preparation of the national team. At its best moment, Greece narrowly missed qualifying for the 1970 FIFA World Cup, despite a good quality team that included some of the greatest-ever Greek players, such as Mimis Domazos, Giorgos Sideris, Giorgos Koudas and Mimis Papaioannou. Greece, under the guidance of Alketas Panagoulias, made its first appearance in a major tournament at Euro 1980 in Italy, after finishing top of a qualifying group that included the Soviet Union and Hungary, both world football powers. In the final tournament, Greece was drawn into Group A with West Germany, the Netherlands and Czechoslovakia. In their first game, Greece held the Dutch until the only goal of the game was scored from a penalty kick by Kist in the 65th minute.
Three days later, Greece played Czechoslovakia in Rome. After holding the Czechoslovakians to 1–1 at the end of the first half, Greece lost 3–1. In their last game, Greece earned a 0–0 draw against eventual winners West Germany, concluding what was considered a decent overall performance in the team's maiden appearance in the final phase of any football competition. The team's success in qualifying for the 1994 FIFA World Cup in the United States marked the first time they had made it to the FIFA World Cup finals. Greece finished undefeated in their qualifying group, surpassing Russia in the final game. In the final tournament Greece were drawn into Group D with Bulgaria, Nigeria and Argentina. After the successful qualifying campaign, expectations back in Greece were high, as no one could imagine the astounding failure to come. The most notable reason for this complete failure was that coach Alketas Panagoulias opted to take a squad full of the players – most of them aging and out of form – who had helped the team qualify, seeing it as a reward for their unprecedented success, instead of new emerging talents.
Furthermore, they had the disadvantage of being drawn into a "group of death" with Argentina, runners-up at the 1990 FIFA World Cup; Bulgaria, the eventual semi-finalists; and Nigeria, one of the strongest African teams. It is worth mentioning that all players of the squad, including the three goalkeepers, took part in those three games, something rare. The tournament was humiliating for the Greek squad. In their first game, against Argentina at Foxboro Stadium just outside Boston, they lost 4–0. Four days later, Greece suffered another 4–0 blow, from Bulgaria at Soldier Field in Chicago, and in their final game they lost 2–0 to Nigeria, again at Foxboro Stadium. In the end, Greece were eliminated in the first round, losing all three games, scoring no goals and conceding ten. Greece then failed to qualify for Euro 1996, finishing third in their group behind Scotland. In their 1998 World Cup qualifying tournament the team finished only one point shy of second-placed Croatia, after a 0–0 draw with the eventual group winners, Denmark.
In their Euro 2000 qualifying group, Greece again finished in third place, two points behind second-placed Slovenia, in a disappointing campaign that saw the team lose at home to Latvia. In 2002 World Cup qualifying, Greece finished a disappointing fourth in their group behind England and Finland, which led to the sacking of coach Vasilis Daniil, who was replaced by Otto Rehhagel. Highlights of the campaign included a 5–1 de
Annual average daily traffic
Annual average daily traffic, abbreviated AADT, is a measure used in transportation planning, transportation engineering and retail location selection. Traditionally, it is the total volume of vehicle traffic on a highway or road for a year divided by 365 days. AADT is a simple but useful measurement of how busy a road is. Newer advances from GPS traffic data providers now make it possible to obtain AADT counts by side of the road, by day of week and by time of day. One of the most important uses of AADT is determining funding for the maintenance and improvement of highways. In the United States, the amount of federal funding a state receives is related to the total traffic measured across its highway network. Each year on June 15, every state in the United States submits a Highway Performance Monitoring System (HPMS) report. The HPMS report contains various information about the road segments in the state, based on a sample of those segments. In the report, the AADT is converted to vehicle miles traveled (VMT).
VMT is the AADT multiplied by the length of the road segment. To determine the amount of traffic a state has, the AADT cannot simply be summed across all road segments, since an AADT is a rate; instead, the VMT is summed and used as an indicator of the amount of traffic a state has. For federal funding, formulas are applied that include the VMT and other highway statistics. In the United Kingdom, AADT is one of a number of measures of traffic used by local highway authorities, Highways England and the Department for Transport to forecast maintenance needs and expenditure. To measure AADT on individual road segments, traffic data is collected by an automated traffic counter, by hiring an observer to record traffic, or by licensing estimated counts from GPS data providers. There are two different techniques for measuring AADTs for road segments with automated traffic counters. One is called the continuous count data collection method; sensors are permanently embedded in a road and traffic data is measured for the entire 365 days.
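The VMT aggregation described above can be sketched in a few lines of Python. The segment data and identifiers here are hypothetical; the point is only that segment VMT = AADT × segment length, and that VMTs (not AADTs) are what get summed statewide.

```python
# Hypothetical road segments: AADT is a rate (vehicles/day), so it cannot
# be summed directly across segments; VMT (vehicle miles traveled) can.
segments = [
    {"id": "I-10-001", "aadt": 45_000, "length_miles": 2.5},
    {"id": "SR-7-014", "aadt": 12_000, "length_miles": 4.0},
    {"id": "US-90-03", "aadt": 8_500,  "length_miles": 1.2},
]

def segment_vmt(aadt: float, length_miles: float) -> float:
    """Daily vehicle miles traveled for one road segment."""
    return aadt * length_miles

# Statewide daily VMT is the sum of segment VMTs.
total_vmt = sum(segment_vmt(s["aadt"], s["length_miles"]) for s in segments)
print(total_vmt)  # 170700.0
```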
The AADT is then the total traffic for the entire year divided by 365 days. There can be problems with calculating the AADT this way: for example, the continuous count equipment may not operate for the full 365 days due to maintenance or repair, and when days are missing, seasonal or day-of-week biases can skew the calculated AADT. In 1992, AASHTO released the AASHTO Guidelines for Traffic Data Programs, which identified a way to produce an AADT without seasonal or day-of-week biases by creating an "average of averages". For every month and day of week, a Monthly Average Day of Week (MADW) is calculated. Each day of week's MADWs are then averaged across the months to calculate an Annual Average Day of Week (AADW), and the seven AADWs are averaged to produce the AADT. The United States Federal Highway Administration has adopted this as the preferred method in the Traffic Monitoring Guide. While providing the most accurate AADT, installing and maintaining continuous count stations is costly, so most public agencies are only able to monitor a small percentage of the roadway network using this method.
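The three-step "average of averages" can be sketched as follows. This is a minimal illustration with synthetic daily counts, not the official AASHTO implementation; the function name and data are invented for the example. Because each (month, day-of-week) cell is averaged first, missing days bias the result far less than a raw annual mean would.

```python
from collections import defaultdict
from datetime import date, timedelta

def aadt_average_of_averages(daily_counts: dict) -> float:
    """AADT via the AASHTO 'average of averages' from {date: count}.

    1. MADW: mean count for each (month, day-of-week) cell.
    2. AADW: mean of each day-of-week's MADWs across the months present.
    3. AADT: mean of the seven AADWs.
    """
    cells = defaultdict(list)
    for d, count in daily_counts.items():
        cells[(d.month, d.weekday())].append(count)
    madw = {k: sum(v) / len(v) for k, v in cells.items()}

    by_dow = defaultdict(list)
    for (_month, dow), avg in madw.items():
        by_dow[dow].append(avg)
    aadw = {dow: sum(v) / len(v) for dow, v in by_dow.items()}

    return sum(aadw.values()) / len(aadw)

# Hypothetical year of data: weekdays busier than weekends.
counts = {}
d = date(2023, 1, 1)
while d.year == 2023:
    counts[d] = 8_000 if d.weekday() < 5 else 5_000
    d += timedelta(days=1)

print(round(aadt_average_of_averages(counts)))  # 7143, i.e. (5*8000 + 2*5000)/7
```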
Most AADTs are generated using short-term data collection methods, sometimes known as the coverage count data collection method. Traffic is recorded with portable sensors that are attached to the road and collect traffic data for 2 to 14 days; these are typically pneumatic road tubes, although other more expensive technologies such as radar, laser or sonar exist. After recording the traffic data, counts on the same road segment are taken again about three years later; the FHWA Traffic Monitoring Guide recommends performing a short count on a road segment at a minimum of every three years. There are many methods used to calculate an AADT from a short-term count, but most attempt to remove seasonal and day-of-week biases from the collection period by applying factors created from associated continuous counters. Short counts are taken either by local government or by contractors. For the years when a traffic count is not recorded, the AADT is estimated by applying a factor called the Growth Factor.
Growth Factors are statistically determined from historical data for the road segment. If there is no historical data, Growth Factors from similar road segments are used. Annual average weekday traffic (AAWT) only includes Monday to Friday data; public holidays are excluded from the AAWT calculation. Average summer daily traffic (ASDT) is a similar measure to the annual average daily traffic. The data collection methods of the two are the same, but ASDT data is collected during summer only; the measure is useful in areas where a given road carries significant seasonal traffic volumes. Average daily traffic, or ADT, sometimes mean daily traffic, is the average number of vehicles passing a specific point in both directions in a 24-hour period, averaged over a period shorter than a full year. ADT is not referred to as often as AADT, which is the engineering standard: the standard measurement for vehicle traffic load on a section of road and the basis for most decisions regarding transport planning and the environmental hazards of pollution related to road transport.
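One common way to derive a Growth Factor statistically is as an average annual growth rate fitted to the segment's historical AADTs, then applied to project years between counts. The sketch below assumes a simple geometric-mean growth model and hypothetical counts; agencies may use other statistical fits.

```python
# Hypothetical growth-factor estimation for a road segment.

def growth_factor(historical: list) -> float:
    """Geometric-mean year-over-year growth from a series of annual AADTs."""
    n = len(historical) - 1  # number of year-over-year intervals
    return (historical[-1] / historical[0]) ** (1 / n)

def project_aadt(last_aadt: float, factor: float, years: int) -> float:
    """Estimate AADT for a year with no count by compounding the factor."""
    return last_aadt * factor ** years

history = [10_000, 10_300, 10_600, 10_900]  # AADTs from counted years
gf = growth_factor(history)                  # about 1.029 per year here

# Estimate the AADT two years after the last recorded count:
print(round(project_aadt(history[-1], gf, 2)))
```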
Notes: The 1992 edition of the AASHTO Guidelines is out of date; the current edition is from 2018. The Gary Davis article was published in Transportation Research Record 1593, 1997; the date shown in the article is the date of an online posting.

External links:
- Florida
- New York State – Traffic Data Viewer – an interactive map program that graphically displays traffic data
- Oklahoma
- Virginia
- FHWA Traffic Monitoring Guide
- New Zealand State Highway AADTs
- Louisiana AADTs
A tsunami or tidal wave, also known as a seismic sea wave, is a series of waves in a water body caused by the displacement of a large volume of water, generally in an ocean or a large lake. Earthquakes, volcanic eruptions and other underwater explosions, and other disturbances above or below water all have the potential to generate a tsunami. Unlike normal ocean waves, which are generated by wind, or tides, which are generated by the gravitational pull of the Moon and the Sun, a tsunami is generated by the displacement of water. Tsunami waves do not resemble normal undersea currents or sea waves because their wavelength is far longer. Rather than appearing as a breaking wave, a tsunami may instead resemble a rapidly rising tide. For this reason, it is often referred to as a "tidal wave", although this usage is not favoured by the scientific community because it might give the false impression of a causal relationship between tides and tsunamis. Tsunamis consist of a series of waves, with periods ranging from minutes to hours, arriving in a so-called "internal wave train".
Wave heights of tens of metres can be generated by large events. Although the impact of tsunamis is limited to coastal areas, their destructive power can be enormous, and they can affect entire ocean basins; the 2004 Indian Ocean tsunami was among the deadliest natural disasters in human history, with at least 230,000 people killed or missing in 14 countries bordering the Indian Ocean. The Ancient Greek historian Thucydides suggested in his 5th-century BC History of the Peloponnesian War that tsunamis were related to submarine earthquakes, but the understanding of tsunamis remained slim until the 20th century, and much remains unknown. Major areas of current research include determining why some large earthquakes do not generate tsunamis while other smaller ones do. The term "tsunami" is a borrowing from the Japanese tsunami 津波, meaning "harbour wave". For the plural, one can either follow ordinary English practice and add an s, or use an invariable plural as in Japanese. Some English speakers alter the word's initial /ts/ to an /s/ by dropping the "t", since English does not natively permit /ts/ at the beginning of words, though the original Japanese pronunciation is /ts/.
Tsunamis are sometimes referred to as tidal waves. This once-popular term derives from the most common appearance of a tsunami, that of an extraordinarily high tidal bore. Tsunamis and tides both produce waves of water that move inland, but in the case of a tsunami, the inland movement of water may be much greater, giving the impression of an incredibly high and forceful tide. In recent years, the term "tidal wave" has fallen out of favour in the scientific community, because the causes of tsunamis have nothing to do with those of tides, which are produced by the gravitational pull of the moon and sun rather than the displacement of water. Although the meanings of "tidal" include "resembling" or "having the form or character of" the tides, use of the term tidal wave is discouraged by geologists and oceanographers. (A 1969 episode of Hawaii Five-O entitled "Forty Feet High And It Kills!" used the terms "tsunami" and "tidal wave" interchangeably.) The term seismic sea wave is also used to refer to the phenomenon, because the waves are most often generated by seismic activity such as earthquakes.
Prior to the rise of the use of the term tsunami in English, scientists encouraged the use of the term seismic sea wave rather than tidal wave. However, like tsunami, seismic sea wave is not a completely accurate term, as forces other than earthquakes – including underwater landslides, volcanic eruptions, underwater explosions, land or ice slumping into the ocean, meteorite impacts, and weather events in which the atmospheric pressure changes rapidly – can generate such waves by displacing water. While Japan may have the longest recorded history of tsunamis, the sheer destruction caused by the 2004 Indian Ocean earthquake and tsunami marks it as the most devastating event of its kind in modern times, killing around 230,000 people. The Sumatran region is accustomed to tsunamis, with earthquakes of varying magnitudes occurring regularly off the coast of the island. Tsunamis are an underestimated hazard in the Mediterranean Sea and parts of Europe. Of historical and current importance are the 1755 Lisbon earthquake and tsunami and the 1783 Calabrian earthquakes, each causing several tens of thousands of deaths, and the 1908 Messina earthquake and tsunami.
The 1908 tsunami claimed more than 123,000 lives in Sicily and Calabria and is among the deadliest natural disasters in modern Europe. The Storegga Slide in the Norwegian Sea and some examples of tsunamis affecting the British Isles refer predominantly to landslide-generated waves and meteotsunamis, and less to earthquake-induced waves. As early as 426 BC the Greek historian Thucydides inquired in his book History of the Peloponnesian War about the causes of tsunamis, and was the first to argue that ocean earthquakes must be the cause: "The cause, in my opinion, of this phenomenon must be sought in the earthquake. At the point where its shock has been the most violent the sea is driven back, and recoiling with redoubled force, causes the inundation. Without an earthquake I do not see..." The Roman historian Ammianus Marcellinus described the typical sequence of a tsunami, including an incipient earthquake, the sudden retreat of the sea and a followin
Plate tectonics is a scientific theory describing the large-scale motion of seven large plates and the movements of a larger number of smaller plates of the Earth's lithosphere, since tectonic processes began on Earth between 3 and 3.5 billion years ago. The model builds on the concept of continental drift, an idea developed during the first decades of the 20th century; the geoscientific community accepted plate-tectonic theory after seafloor spreading was validated in the late 1950s and early 1960s. The lithosphere, the rigid outermost shell of a planet, is broken into tectonic plates; besides the large plates, the Earth's lithosphere is composed of many minor plates. Where the plates meet, their relative motion determines the type of boundary: convergent, divergent, or transform. Earthquakes, volcanic activity, mountain-building and oceanic trench formation occur along these plate boundaries, and the relative movement of the plates ranges from zero to 100 mm annually. Tectonic plates are composed of oceanic lithosphere and thicker continental lithosphere, each topped by its own kind of crust.
Along convergent boundaries, subduction, or one plate moving under another, carries the lower one down into the mantle; in this way, the total surface of the lithosphere remains the same. This prediction of plate tectonics is referred to as the conveyor belt principle. Earlier theories, since disproven, proposed gradual expansion of the globe. Tectonic plates are able to move because the Earth's lithosphere has greater mechanical strength than the underlying asthenosphere, and lateral density variations in the mantle result in convection. Plate movement is thought to be driven by a combination of the motion of the seafloor away from spreading ridges, due to variations in topography and density changes in the crust, and the pull at subduction zones, where the cold, dense crust is "pulled" or sinks down into the mantle over the downward convecting limb of a mantle cell. Another proposed contribution lies in the forces generated by the tidal pull of the Moon. The relative importance of each of these factors and their relationship to each other is unclear, and still the subject of much debate.
The outer layers of the Earth are divided into the lithosphere and the asthenosphere. The division is based on differences in mechanical properties and in the method of heat transfer: the lithosphere is more rigid, while the asthenosphere is hotter and flows more easily. In terms of heat transfer, the lithosphere loses heat by conduction, whereas the asthenosphere transfers heat by convection and has a nearly adiabatic temperature gradient. This division should not be confused with the chemical subdivision of these same layers into the mantle and the crust: a given piece of mantle may be part of the lithosphere or the asthenosphere at different times, depending on its temperature and pressure. The key principle of plate tectonics is that the lithosphere exists as separate and distinct tectonic plates, which ride on the fluid-like asthenosphere. Plate motions range from a typical 10–40 mm/year up to about 160 mm/year; the driving mechanism behind this movement is described below. Tectonic lithosphere plates consist of lithospheric mantle overlain by one or both of two types of crustal material: oceanic crust and continental crust.
Average oceanic lithosphere is 100 km thick. Because it is formed at mid-ocean ridges and spreads outwards, its thickness is a function of its distance from the mid-ocean ridge where it was formed. Over the typical distance that oceanic lithosphere travels before being subducted, the thickness varies from about 6 km at mid-ocean ridges to greater than 100 km at subduction zones. Continental lithosphere is about 200 km thick, though this varies considerably between basins, mountain ranges and the stable cratonic interiors of continents. The location where two plates meet is called a plate boundary. Plate boundaries are associated with geological events such as earthquakes and with the creation of topographic features such as mountains, mid-ocean ridges and oceanic trenches. The majority of the world's active volcanoes occur along plate boundaries, with the Pacific Plate's Ring of Fire being the most active and widely known today. These boundaries are discussed in further detail below. Some volcanoes occur in the interiors of plates; these have been variously attributed to internal plate deformation and to mantle plumes.
As explained above, tectonic plates may include continental crust or oceanic crust, and most plates contain both. For example, the African Plate includes the continent and parts of the floor of the Atlantic and Indian Oceans. The distinction between oceanic crust and continental crust is based on their modes of formation. Oceanic crust is fo
Lightning is a violent and sudden electrostatic discharge in which two electrically charged regions in the atmosphere temporarily equalize themselves, usually during a thunderstorm. Lightning creates a wide range of electromagnetic radiation from the hot plasma created by the electron flow, including visible light in the form of black-body radiation. Thunder is the sound formed by the shock wave produced as gaseous molecules experience a rapid pressure increase. The three main kinds of lightning are distinguished by where they occur: inside one thundercloud, between two clouds, or between a cloud and the ground. The 15 recognized observational variants include "heat lightning", which is seen but not heard; dry lightning, which causes many forest fires; and ball lightning, which is rarely observed scientifically. Humans have deified lightning for millennia, and lightning-inspired expressions like "bolt from the blue", "lightning never strikes twice" and "blitzkrieg" are common; in some languages, "love at first sight" translates literally as "lightning strike". The details of the charging process are still being studied by scientists, but there is general agreement on some of the basic concepts of thunderstorm electrification.
The main charging area in a thunderstorm occurs in the central part of the storm, where air is moving upward rapidly and temperatures range from −15 to −25 °C. There, the combination of temperature and rapid upward air movement produces a mixture of super-cooled cloud droplets, small ice crystals and graupel. The updraft carries the super-cooled cloud droplets and small ice crystals upward, while the graupel, being larger and denser, tends to fall or be suspended in the rising air. The differences in the movement of the precipitation cause collisions to occur; when the rising ice crystals collide with graupel, the ice crystals become positively charged and the graupel becomes negatively charged. The updraft carries the positively charged ice crystals toward the upper part of the cloud, while the larger and denser graupel is either suspended in the middle of the thunderstorm cloud or falls toward the lower part of the storm. The result is that the upper part of the thunderstorm cloud becomes positively charged while the middle to lower part of the thunderstorm cloud becomes negatively charged.
The upward motions within the storm, and winds at higher levels in the atmosphere, tend to cause the small ice crystals in the upper part of the thunderstorm cloud to spread out horizontally some distance from the thunderstorm cloud base; this part of the cloud is called the anvil. While this is the main charging process for the thunderstorm cloud, some of these charges can be redistributed by air movements within the storm. In addition, there is a small but important positive charge buildup near the bottom of the thunderstorm cloud due to the precipitation and warmer temperatures there. A typical cloud-to-ground lightning flash culminates in the formation of an electrically conducting plasma channel through the air, in excess of 5 km tall, from within the cloud to the ground's surface; the actual discharge is the final stage of a complex process. At its peak, a typical thunderstorm produces three or more strikes to the Earth per minute. Lightning primarily occurs when warm air is mixed with colder air masses, resulting in the atmospheric disturbances necessary for polarizing the atmosphere.
However, it can also occur during dust storms, forest fires and volcanic eruptions, and in the cold of winter, where the lightning is known as thundersnow. Hurricanes generate some lightning, mainly in the rainbands as much as 160 km from the center. The science of lightning is called fulminology, and the fear of lightning is called astraphobia. Lightning is not distributed evenly around the planet. On Earth, the lightning frequency is approximately 44 times per second, or nearly 1.4 billion flashes per year, and the average duration is 0.2 seconds, made up from a number of much shorter flashes of around 60 to 70 microseconds. Many factors affect the frequency, distribution and physical properties of a typical lightning flash in a particular region of the world; these factors include ground elevation, prevailing wind currents, relative humidity, proximity to warm and cold bodies of water, etc. To a certain degree, the ratio between intracloud (IC), cloud-to-cloud (CC) and cloud-to-ground (CG) lightning may vary by season in middle latitudes. Because human beings are terrestrial and most of their possessions are on the Earth, where lightning can damage or destroy them, CG lightning is the most studied and best understood of the three types, even though IC and CC are more common.
Lightning's relative unpredictability limits a complete explanation of how or why it occurs, even after hundreds of years of scientific investigation. About 70% of lightning occurs over land in the tropics. This arises both from the mixture of warmer and colder air masses and from differences in moisture concentrations, and it happens at the boundaries between them. The flow of warm ocean currents past drier land masses, such as the Gulf Stream, partially explains the elevated frequency of lightning in the Southeast United States. Because the influence of small or absent land masses in the vast stretches of the world's oceans limits the differences between these variants in the atmosphere, lightning is notably less frequent there than over larger landforms. The North and South Poles are limited in their coverage of thunderstorms and theref
An architect is a person who plans, designs and reviews the construction of buildings. To practice architecture means to provide services in connection with the design of buildings and the space within the site surrounding the buildings that have human occupancy or use as their principal purpose. Etymologically, architect derives from the Latin architectus, which derives from the Greek arkhitekton (arkhi-, chief, and tekton, builder), i.e. chief builder. Professionally, an architect's decisions affect public safety, and thus an architect must undergo specialized training consisting of advanced education and a practicum for practical experience to earn a license to practice architecture. Practical and academic requirements for becoming an architect vary by jurisdiction. Throughout ancient and medieval history, most architectural design and construction was carried out by artisans, such as stone masons and carpenters, who rose to the role of master builder; until modern times, there was no clear distinction between architect and engineer. In Europe, the titles architect and engineer were primarily geographical variations that referred to the same person, and were often used interchangeably.
It is suggested that various developments in technology and mathematics allowed the development of the professional 'gentleman' architect, separate from the hands-on craftsman. Paper was not used in Europe for drawing until the 15th century, but it became increasingly available after 1500. Pencils came into wider use for drawing by 1600; the availability of both allowed pre-construction drawings to be made by professionals. Concurrently, the introduction of linear perspective and innovations such as the use of different projections to describe a three-dimensional building in two dimensions, together with an increased understanding of dimensional accuracy, helped building designers communicate their ideas. However, the development was gradual; until the 18th century, buildings continued to be designed and set out by craftsmen, with the exception of high-status projects. In most developed countries, only those qualified with an appropriate license, certification or registration with a relevant body may practice architecture.
Such licensure usually requires a university degree, successful completion of exams, and a training period. Representation of oneself as an architect through the use of terms and titles is restricted to licensed individuals by law, although in general, derivatives such as architectural designer are not protected. To practice architecture implies the ability to practice independently of supervision. The term building design professional, by contrast, is much broader: it includes professionals who practice independently under an alternate profession, such as engineering professionals, as well as those who assist in the practice of architecture under the supervision of a licensed architect, such as intern architects. In many places, non-licensed individuals may perform design services outside the professional restrictions, such as designing houses and other smaller structures. In the architectural profession, technical and environmental knowledge, construction management, and an understanding of business are as important as design.
However, the design is the driving force throughout the project and beyond. An architect accepts a commission from a client; the commission might involve preparing feasibility reports, building audits, or the design of a building or of several buildings and the spaces among them. The architect participates in developing the requirements. Throughout the project, the architect co-ordinates a design team. Structural and electrical engineers and other specialists are hired by the client or the architect, who must ensure that the work is co-ordinated to construct the design. Once hired by a client, the architect is responsible for creating a design concept that both meets the requirements of that client and provides a facility suitable to the required use. The architect must meet with and question the client in order to ascertain all the requirements of the planned project. Often the full brief is not entirely clear at the beginning, entailing a degree of risk in the design undertaking. The architect may make early proposals to the client, who may rework the terms of the brief.
The "program" is essential to producing a project; it is a guide for the architect in creating the design concept. Design proposals are expected to be both imaginative and pragmatic; the precise extent and nature of these expectations will vary with the place, the finances, and the crafts and technology available where the design takes place. Foresight is a prerequisite, as designing buildings is a complex and demanding undertaking. At an early stage in its generation, any design concept must take into account a great number of issues and variables, including the qualities of space, the end-use and life-cycle of the proposed spaces, the connections and aspects between spaces, including how they are put together, and the impact of the proposals on the immediate and wider locality. Selection of appropriate materials and technology must be considered and reviewed at an early stage in the design to avoid setbacks that may occur later; the site and its environs, as well as the culture and history of the place, will also influence the design.
The design must address increasing concerns with environmental sustainability. The architect may introduce, to greater or lesser degrees, aspects of mathematics and a