A physical test is a qualitative or quantitative procedure that determines one or more characteristics of a given product, process or service according to a specified procedure, often as part of an experiment. Physical testing is common in physics and quality assurance. Physical testing may serve a variety of purposes, such as:
Determine if, or verify that, the requirements of a specification, regulation, or contract are met
Decide if a new product development program is on track
Demonstrate proof of concept
Demonstrate the utility of a proposed patent
Provide standard data for other scientific and quality assurance functions
Validate suitability for end-use
Provide a basis for technical communication
Provide a technical means of comparison of several options
Provide evidence in legal proceedings
Some physical testing is performance testing, which covers a wide range of engineering or functional evaluations where a material, product, or system is not specified by detailed material or component specifications.
Rather, emphasis is on the final measurable performance characteristics, and testing can be a quantitative procedure. Many acceptance testing protocols employ performance testing, for example the stress testing of a new chair design. Performance testing is applied in areas such as building and construction, fire protection, packaging, tire performance indices, performance test codes for compressors and exhausters, personal protective equipment, defense standards, and the wear of textiles. Related topics include environmental chambers, test methods, independent test organizations, and measurement uncertainty.
Educational technology is "the study and ethical practice of facilitating learning and improving performance by creating and managing appropriate technological processes and resources". It encompasses several domains, including learning theory, computer-based training, online learning and, where mobile technologies are used, m-learning. Accordingly, there are several discrete aspects to describing the intellectual and technical development of educational technology:
Educational technology as the theory and practice of educational approaches to learning.
Educational technology as technological tools and media, for instance massive online courses, that assist in the communication of knowledge and its development and exchange; this is what people are referring to when they use the term "EdTech".
Educational technology for learning management systems, such as tools for student and curriculum management, and education management information systems.
Educational technology as back-office management, such as training management systems for logistics and budget management, and Learning Record Stores for learning data storage and analysis.
Educational technology itself as an educational subject.
The Association for Educational Communications and Technology defined educational technology as "the study and ethical practice of facilitating learning and improving performance by creating and managing appropriate technological processes and resources" and denoted instructional technology as "the theory and practice of design, utilization and evaluation of processes and resources for learning". As such, educational technology refers to all valid and reliable applied education sciences, such as equipment, as well as processes and procedures derived from scientific research; in a given context it may refer to theoretical, algorithmic or heuristic processes and does not imply physical technology. Educational technology is the process of integrating technology into education in a positive manner that promotes a more diverse learning environment and gives students a way to learn how to use technology as well as complete their common assignments.
Educational technology is an inclusive term for both the material tools and the theoretical foundations for supporting learning and teaching. Educational technology is not restricted to high technology but is anything that enhances classroom learning in the utilization of blended, face-to-face, or online learning. An educational technologist is someone trained in the field of educational technology. Educational technologists try to analyze, develop and evaluate processes and tools to enhance learning. While the term educational technologist is used in the United States, learning technologist is the synonymous term used in the UK as well as Canada. Modern electronic educational technology is an important part of society today. Educational technology encompasses e-learning, instructional technology, information and communication technology in education, EdTech, learning technology, multimedia learning, technology-enhanced learning, computer-based instruction, computer managed instruction, computer-based training, computer-assisted instruction or computer-aided instruction, internet-based training, flexible learning, web-based training, online education, digital educational collaboration, distributed learning, computer-mediated communication, cyber-learning, multi-modal instruction, virtual education, personal learning environments, networked learning, virtual learning environments, m-learning, ubiquitous learning and digital education.
Each of these numerous terms has had its advocates. However, many terms and concepts in educational technology have been defined nebulously. Moreover, Moore saw these terminologies as emphasizing particular features such as digitization approaches, components or delivery methods rather than being fundamentally dissimilar in concept or principle. For example, m-learning emphasizes mobility, which allows for altered timing, location and context of learning. In practice, as technology has advanced, the particular "narrowly defined" terminological aspect emphasized by each name has blended into the general field of educational technology. For instance, "virtual learning" as narrowly defined in a semantic sense implied entering an environmental simulation within a virtual world, for example in treating posttraumatic stress disorder. In practice, a "virtual education course" refers to any instructional course in which all, or at least a significant portion, is delivered by the Internet. "Virtual" is used in that broader way to describe a course not taught in a classroom face-to-face but through a substitute mode that can conceptually be associated "virtually" with classroom teaching, which means that people do not have to go to the physical classroom to learn.
Accordingly, virtual education refers to a form of distance learning in which course content is delivered by various methods such as course management applications, multimedia resources, videoconferencing. Virtual education and simulated learning opportunities, such as games or dissections, offer opportunities for students to connect classroom content to authentic situations. Educational conte
A/B testing is a randomized experiment with two variants, A and B. It includes application of statistical hypothesis testing or "two-sample hypothesis testing" as used in the field of statistics. A/B testing is a way to compare two versions of a single variable by testing a subject's response to variant A against variant B and determining which of the two variants is more effective; as the name implies, two versions are compared, which are identical except for one variation that might affect a user's behavior. Version A might be the currently used version, while version B is modified in some respect. For instance, on an e-commerce website the purchase funnel is a good candidate for A/B testing, as marginal improvements in drop-off rates can represent a significant gain in sales. Significant improvements can sometimes be seen through testing elements like copy text, layouts and colors, but not always. Multivariate testing or multinomial testing is similar to A/B testing, but may test more than two versions at the same time or use more controls.
Simple A/B tests are not valid for observational, quasi-experimental or other non-experimental situations, as is common with survey data, offline data and other, more complex phenomena. A/B testing has been marketed by some as a change in philosophy and business strategy in certain niches, though the approach is identical to a between-subjects design, which is used in a variety of research traditions. A/B testing as a philosophy of web development brings the field into line with a broader movement toward evidence-based practice; the benefits of A/B testing are considered to be that it can be performed continuously on anything, since most marketing automation software now comes with the ability to run A/B tests on an ongoing basis. Two-sample hypothesis tests are appropriate for comparing the two samples, where the samples correspond to the two variants in the experiment. Z-tests are appropriate for comparing means under stringent conditions regarding normality and a known standard deviation. Student's t-tests are appropriate for comparing means under relaxed conditions.
Welch's t-test assumes the least and is therefore the most commonly used test in a two-sample hypothesis test where the mean of a metric is to be optimized. While the mean of the variable to be optimized is the most common choice of estimator, others are used. For a comparison of two binomial distributions, such as a click-through rate, one would use Fisher's exact test. As in most fields, setting a date for the advent of a new method is difficult because of the continuous evolution of a topic. Where the difference could be defined is when the switch was made from using any assumed information from the populations to a test performed on the samples alone; this work was done in 1908 by William Sealy Gosset when he altered the Z-test to create Student's t-test. Google engineers ran their first A/B test in the year 2000 in an attempt to determine the optimum number of results to display on its search engine results page; the first test was unsuccessful due to glitches. A/B testing research would later become more advanced, but the foundation and underlying principles remain the same; in 2011, 11 years after Google's first test, Google ran over 7,000 different A/B tests.
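As an illustration of the test selection described above, the following minimal sketch compares the mean of a continuous metric between two variants using Welch's t-test in SciPy (ttest_ind with equal_var=False). The metric name and sample data are invented for the example and are not taken from any study referenced in the text.

```python
# Minimal sketch of a two-sample comparison of means; the data are invented
# for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
# Hypothetical continuous metric (e.g. revenue per visitor) for each variant.
variant_a = rng.normal(loc=10.0, scale=4.0, size=1000)   # control (A)
variant_b = rng.normal(loc=10.5, scale=5.0, size=1000)   # treatment (B)

# Welch's t-test: equal_var=False drops the equal-variance assumption,
# which is why it is often preferred for comparing the means of two variants.
t_stat, p_value = stats.ttest_ind(variant_a, variant_b, equal_var=False)
print(f"Welch's t = {t_stat:.3f}, p = {p_value:.4f}")
```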
A company with a customer database of 2,000 people decides to create an email campaign with a discount code in order to generate sales through its website. It creates two versions of the email with a different call to action and an identifying promotional code. To 1,000 people it sends the email with the call to action stating "Offer ends this Saturday! Use code A1", and to another 1,000 people it sends the email with the call to action stating "Offer ends soon! Use code B1". All other elements of the emails' copy and layout are identical; the company monitors which campaign has the higher success rate by analyzing the use of the promotional codes. The email using code A1 has a 5% response rate, while the email using code B1 has a 3% response rate; the company therefore determines that in this instance the first call to action is more effective and will use it in future sales. A more nuanced approach would involve applying statistical testing to determine whether the difference in response rates between A1 and B1 was statistically significant.
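A minimal sketch of such a significance check, applying Fisher's exact test (mentioned earlier for comparing binomial outcomes) to the response counts implied by this example: 50 of 1,000 recipients responded to code A1 and 30 of 1,000 to code B1. The use of SciPy here is illustrative and not part of the original example.

```python
# Hedged sketch: Fisher's exact test on the 5% vs 3% email response rates above.
from scipy import stats

responded_a, sent_a = 50, 1000   # code A1: 5% response rate
responded_b, sent_b = 30, 1000   # code B1: 3% response rate

# 2x2 contingency table: responders vs non-responders for each variant.
table = [[responded_a, sent_a - responded_a],
         [responded_b, sent_b - responded_b]]

odds_ratio, p_value = stats.fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
# A p-value below the chosen significance level (commonly 0.05) would indicate
# that the difference in response rates is unlikely to be due to chance alone.
```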
In the example above, the purpose of the test is to determine which is the more effective way to encourage customers to make a purchase. If the aim of the test had been to see which email would generate the higher click-rate – that is, the number of people who click through to the website after receiving the email – the results might have been different. For example, even though more of the customers receiving code B1 accessed the website, because the call to action did not state the end-date of the promotion, many of them may have felt no urgency to make an immediate purchase. If the purpose of the test had been to see which email would bring more traffic to the website, the email containing code B1 might well have been more successful. An A/B test should have a defined, measurable outcome such as the number of sales made, click-rate conversion, or number of people signing up/registering. A/B tests most apply the same variant with equ
A standards organization, standards body, standards developing organization, or standards setting organization is an organization whose primary activities are developing, promulgating, amending, interpreting, or otherwise producing technical standards that are intended to address the needs of a group of affected adopters. Most standards are voluntary in the sense that they are offered for adoption by people or industry without being mandated in law; some standards become mandatory when they are adopted by regulators as legal requirements in particular domains. The term formal standard refers to a specification approved by a standards setting organization; the term de jure standard refers to a standard mandated by legal requirements or refers to any formal standard. In contrast, the term de facto standard refers to a specification that has achieved widespread use and acceptance – without being approved by any standards organization. Examples of de facto standards that were not approved by any standards organizations include the Hayes command set developed by Hayes, Apple's TrueType font design and the PCL protocol used by Hewlett-Packard in the computer printers they produced.
The term standards organization is not used to refer to the individual parties participating within the standards developing organization in the capacity of founders, stakeholders, members or contributors, who themselves may function as standards organizations. The implementation of standards in industry and commerce became important with the onset of the Industrial Revolution and the need for high-precision machine tools and interchangeable parts. Henry Maudslay developed the first industrially practical screw-cutting lathe in 1800, which allowed for the standardisation of screw thread sizes for the first time. Maudslay's work, as well as the contributions of other engineers, accomplished a modest amount of industry standardization. Joseph Whitworth's screw thread measurements were adopted as the first national standard by companies around the country in 1841; it came to be known as the British Standard Whitworth and was adopted in other countries. By the end of the 19th century, differences in standards between companies were making trade increasingly difficult and strained.
For instance, an iron and steel dealer recorded his displeasure in The Times: "Architects and engineers specify such unnecessarily diverse types of sectional material for given work that anything like economical and continuous manufacture becomes impossible. In this country no two professional men are agreed upon the size and weight of a girder to employ for given work." The Engineering Standards Committee was established in London in 1901 as the world's first national standards body. It subsequently extended its standardization work and became the British Engineering Standards Association in 1918, adopting the name British Standards Institution in 1931 after receiving its Royal Charter in 1929. The national standards were adopted universally throughout the country and enabled the markets to act more rationally and efficiently, with an increased level of cooperation. After the First World War, similar national bodies were established in other countries; the Deutsches Institut für Normung was set up in Germany in 1917, followed by its counterparts, the American National Standards Institute and the French Commission Permanente de Standardisation, both in 1918.
By the mid to late 19th century, efforts were being made to standardize electrical measurement. An important figure was R. E. B. Crompton, who became concerned by the large range of different standards and systems used by electrical engineering companies and scientists in the early 20th century. Many companies had entered the market in the 1890s and all chose their own settings for voltage, frequency and the symbols used on circuit diagrams. Adjacent buildings would have incompatible electrical systems because they had been fitted out by different companies. Crompton could see the lack of efficiency in this system and began to consider proposals for an international standard for electric engineering. In 1904, Crompton represented Britain at the Louisiana Purchase Exposition in St. Louis, Missouri, as part of a delegation by the Institute of Electrical Engineers. He presented a paper on standardisation, which was so well received that he was asked to look into the formation of a commission to oversee the process.
By 1906 his work was complete and he drew up a permanent constitution for the first international standards organization, the International Electrotechnical Commission. The body held its first meeting that year with representatives from 14 countries. In honour of his contribution to electrical standardisation, Lord Kelvin was elected as the body's first President. The International Federation of the National Standardizing Associations (ISA) was founded in 1926 with a broader remit to enhance international cooperation for all technical standards and specifications. The body was suspended in 1942 during World War II. After the war, ISA was approached by the newly formed United Nations Standards Coordinating Committee (UNSCC) with a proposal to form a new global standards body. In October 1946, ISA and UNSCC delegates from 25 countries met in London and agreed to join forces to create the new International Organization for Standardization. Standards organizations can b
Jakob Nielsen (usability consultant)
Jakob Nielsen is a Danish web usability consultant. He holds a Ph.D. in human–computer interaction from the Technical University of Denmark in Copenhagen. Nielsen's earlier affiliations include Bellcore, the Technical University of Denmark, and the IBM User Interface Institute at the Thomas J. Watson Research Center. From 1994 to 1998, he was a Sun Microsystems Distinguished Engineer. He was hired to make heavy-duty enterprise software easier to use, since large-scale applications had been the focus of most of his projects at the phone company and IBM. But luckily, the job definition of a Distinguished Engineer is "you're supposed to be the world's leading expert in your field, so you figure out what would be most important for the company for you to work on." Nielsen therefore ended up spending most of his time at Sun defining the emerging field of web usability; he was the usability lead for several design rounds of Sun's website and intranet, including the original SunWeb design in 1994. Nielsen is on the editorial board of Morgan Kaufmann Publishers' book series in Interactive Technologies.
Nielsen writes a fortnightly newsletter, Alertbox, on web design matters and has published several books on the subject of web design. After his regular articles on his web site about usability research attracted media attention, he co-founded the usability consulting company Nielsen Norman Group with fellow usability expert Donald Norman. Nielsen founded the "discount usability engineering" movement for fast and cheap improvements of user interfaces and has invented several usability methods, including heuristic evaluation; he holds 79 United States patents on ways of making the Web easier to use. Nielsen gave his name to Nielsen's Law, in which he stated that network connection speeds for high-end home users would increase 50% per year, or double every 21 months (a quick check of this doubling time is sketched below). As a corollary, he noted that, since this growth rate is slower than that predicted by Moore's Law of processor power, user experience would remain bandwidth-bound. Nielsen has defined the five quality components of his "Usability Goals": learnability, efficiency, memorability, errors, and satisfaction. Nielsen has been criticized by some graphic designers for failing to balance the importance of other user experience considerations such as typography, visual cues for hierarchy and importance, and eye appeal.
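The doubling time implied by Nielsen's Law can be checked with a short calculation; this is an illustrative sketch, not code published by Nielsen.

```python
# 50% annual growth in connection speed implies a doubling time of
# log(2) / log(1.5) years.
import math

annual_growth = 1.5
doubling_time_years = math.log(2) / math.log(annual_growth)
print(f"{doubling_time_years:.2f} years "
      f"= {doubling_time_years * 12:.0f} months (approximately)")
# Prints roughly 1.71 years, i.e. about 21 months, matching the law as usually stated.
```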
Nielsen has been quoted in the computing and mainstream press for his criticism of Windows 8's user interface. Tom Hobbs, creative director of the design firm Teague, criticized what he perceived to be some of Nielsen's points on the matter, and Nielsen responded with some clarifications. The subsequent short and troubled history of Windows 8, released on October 26, 2012, seems to have confirmed Nielsen's criticism: the sales of Windows-based systems plummeted after the introduction of Windows 8. Nielsen's 2012 guidelines that web sites for mobile devices be designed separately from their desktop-oriented counterparts have come under fire from Webmonkey's Scott Gilbertson, as well as Josh Clark writing in .net magazine, Opera's Bruce Lawson writing in Smashing Magazine, and other technologists and web designers who advocate responsive web design. In an interview with .net magazine, Nielsen explained that he wrote his guidelines from a usability perspective, not from the viewpoint of implementation.
However, Nielsen appears to have ignored the emerging software technology related to responsive web design, whereby a single body of code, while requiring more painstaking implementation, can be run on devices of various screen sizes, from mobile screens to desktop monitors. In 2010, Nielsen was listed by Bloomberg Businessweek among 28 "World's Most Influential Designers". In recognition of Nielsen's contributions to usability studies, in 2013 SIGCHI awarded him the Lifetime Practice Award. His published books include:
Hypertext and Hypermedia
Usability Engineering
Designing Web Usability: The Practice of Simplicity
E-Commerce User Experience
Homepage Usability: 50 Websites Deconstructed
Prioritizing Web Usability
Eyetracking Web Usability
Mobile Usability
A list of Jakob Nielsen's research publications is maintained at Interaction-Design.org.
Randomness is the lack of pattern or predictability in events. A random sequence of events, symbols or steps has no order and does not follow an intelligible pattern or combination. Individual random events are by definition unpredictable, but in many cases the frequency of different outcomes over a large number of events is predictable. For example, when throwing two dice, the outcome of any particular roll is unpredictable, but a sum of 7 will occur twice as often as a sum of 4. In this view, randomness is a measure of uncertainty of an outcome, rather than haphazardness, and applies to concepts of chance and information entropy. The fields of mathematics and statistics use formal definitions of randomness. In statistics, a random variable is an assignment of a numerical value to each possible outcome of an event space; this association facilitates the calculation of probabilities of the events. Random variables can appear in random sequences. A random process is a sequence of random variables whose outcomes do not follow a deterministic pattern, but follow an evolution described by probability distributions.
These and other constructs are useful in probability theory and the various applications of randomness. Randomness is most often used in statistics to signify well-defined statistical properties. Monte Carlo methods, which rely on random input, are important techniques in science, particularly in computational science. By analogy, quasi-Monte Carlo methods use quasirandom number generators. Random selection, when narrowly associated with a simple random sample, is a method of selecting items from a population where the probability of choosing a specific item is the proportion of those items in the population. For example, with a bowl containing just 10 red marbles and 90 blue marbles, a random selection mechanism would choose a red marble with probability 1/10. Note that a random selection mechanism that selected 10 marbles from this bowl would not necessarily result in 1 red and 9 blue. In situations where a population consists of items that are distinguishable, a random selection mechanism requires equal probabilities for any item to be chosen.
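As a small illustration of the marble example, the sketch below (using Python's standard random module; the number of trials is arbitrary) repeatedly draws 10 marbles and shows that the count of red marbles fluctuates around the expected value of 1 rather than equalling it in every draw.

```python
# Simulate drawing 10 marbles from a bowl of 10 red and 90 blue marbles.
import random

bowl = ["red"] * 10 + ["blue"] * 90

for trial in range(5):
    draw = random.sample(bowl, 10)                      # sample without replacement
    print(f"trial {trial}: {draw.count('red')} red")    # count varies from draw to draw
```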
In other words, if the selection process is such that each member of a population, say of research subjects, has the same probability of being chosen, then we can say the selection process is random. In ancient history, the concepts of chance and randomness were intertwined with that of fate. Many ancient peoples threw dice to determine fate, and this later evolved into games of chance. Most ancient cultures used various methods of divination to attempt to circumvent randomness and fate. The Chinese of 3,000 years ago were the earliest people to formalize odds and chance. The Greek philosophers discussed randomness at length, but only in non-quantitative forms; it was only in the 16th century that Italian mathematicians began to formalize the odds associated with various games of chance. The invention of the calculus had a positive impact on the formal study of randomness. In the 1888 edition of his book The Logic of Chance, John Venn wrote a chapter on "The conception of randomness" that included his view of the randomness of the digits of the number pi, by using them to construct a random walk in two dimensions.
The early part of the 20th century saw a rapid growth in the formal analysis of randomness, as various approaches to the mathematical foundations of probability were introduced. In the mid- to late-20th century, ideas of algorithmic information theory introduced new dimensions to the field via the concept of algorithmic randomness. Although randomness had been viewed as an obstacle and a nuisance for many centuries, in the 20th century computer scientists began to realize that the deliberate introduction of randomness into computations can be an effective tool for designing better algorithms. In some cases such randomized algorithms outperform the best deterministic methods. Many scientific fields are concerned with randomness: In the 19th century, scientists used the idea of random motions of molecules in the development of statistical mechanics to explain phenomena in thermodynamics and the properties of gases. According to several standard interpretations of quantum mechanics, microscopic phenomena are objectively random.
That is, in an experiment that controls all causally relevant parameters, some aspects of the outcome still vary randomly. For example, if a single unstable atom is placed in a controlled environment, it cannot be predicted how long it will take for the atom to decay—only the probability of decay in a given time. Thus, quantum mechanics does not specify the outcome of individual experiments but only the probabilities. Hidden variable theories reject the view that nature contains irreducible randomness: such theories posit that in the processes that appear random, properties with a certain statistical distribution are at work behind the scenes, determining the outcome in each case. The modern evolutionary synthesis ascribes the observed diversity of life to random genetic mutations followed by natural selection. The latter retains some random mutations in the gene pool due to the systematically improved chance for survival and reproduction that those mutated genes confer on individuals who possess them.
Several authors claim that evolution and sometimes development require a specific form of randomness, namely the introduction of qualitatively new behaviors. Instead of the choice of one possibility among several pre-given ones, this randomness corresponds to the formation of new possibilities; the characteristics of an organism arise to some extent deterministically and to som
Sun Microsystems, Inc. was an American company that sold computers, computer components and information technology services, and that created the Java programming language, the Solaris operating system, ZFS, the Network File System and SPARC. Sun contributed to the evolution of several key computing technologies, among them Unix, RISC processors, thin client computing and virtualized computing. Sun was founded on February 24, 1982. At its height, the Sun headquarters were in Santa Clara, California, on the former west campus of the Agnews Developmental Center. On April 20, 2009, it was announced that Oracle Corporation would acquire Sun; the deal was completed on January 27, 2010. Sun products included computer servers and workstations built on its own RISC-based SPARC processor architecture, as well as on x86-based AMD Opteron and Intel Xeon processors. Sun developed its own storage systems and a suite of software products, including the Solaris operating system, developer tools, Web infrastructure software and identity management applications. Other technologies included the Java platform and NFS.
In general, Sun was a proponent of open systems, particularly Unix. It was a major contributor to open-source software, as evidenced by its $1 billion purchase, in 2008, of MySQL, an open-source relational database management system. At various times, Sun had manufacturing facilities in several locations worldwide, including Newark, California. However, by the time the company was acquired by Oracle, it had outsourced most manufacturing responsibilities. The initial design for what became Sun's first Unix workstation, the Sun-1, was conceived by Andy Bechtolsheim when he was a graduate student at Stanford University in Palo Alto, California. Bechtolsheim designed the SUN workstation for the Stanford University Network communications project as a personal CAD workstation. It was designed around the Motorola 68000 processor with an advanced memory management unit to support the Unix operating system with virtual memory support. He built the first ones from spare parts obtained from Stanford's Department of Computer Science and Silicon Valley supply houses.
On February 24, 1982, Vinod Khosla, Andy Bechtolsheim and Scott McNealy, all Stanford graduate students, founded Sun Microsystems. Bill Joy of Berkeley, a primary developer of the Berkeley Software Distribution, joined soon after and is counted as one of the original founders. The Sun name is derived from the initials of the Stanford University Network. Sun was profitable from its first quarter in July 1982. By 1983 Sun was known for producing 68k-based systems with high-quality graphics that were the only computers other than DEC's VAX to run 4.2BSD. It licensed the computer design to other manufacturers, which used it to build Multibus-based systems running Unix from UniSoft. Sun's initial public offering was in 1986 under the stock symbol SUNW, for Sun Workstations; the symbol was changed in 2007 to JAVA. Sun's logo, which features four interleaved copies of the word sun in the form of a rotationally symmetric ambigram, was designed by professor Vaughan Pratt of Stanford. The initial version of the logo was orange and had the sides oriented horizontally and vertically, but it was subsequently rotated to stand on one corner and re-colored purple, and later blue.
In the dot-com bubble, Sun began making much more money, and its shares rose dramatically. It began spending much more, hiring workers and building itself out; some of this was because of genuine demand, but much was from web start-up companies anticipating business that would never happen. In 2000, the bubble burst. Sales in Sun's important hardware division went into free-fall as customers closed shop and auctioned off high-end servers. Several quarters of steep losses led to executive departures, rounds of layoffs and other cost cutting. In December 2001, the stock fell to the 1998, pre-bubble level of about $100, but it kept falling, faster than many other tech companies. A year later it had dipped below $10 but bounced back to $20. In mid-2004, Sun closed their Newark, California, factory and consolidated all manufacturing in Hillsboro, Oregon. In 2006, the rest of the Newark campus was put on the market. In 2004, Sun canceled two major processor projects which emphasized high instruction-level parallelism and operating frequency.
Instead, the company chose to concentrate on processors optimized for multi-threading and multiprocessing, such as the UltraSPARC T1 processor. The company announced a collaboration with Fujitsu to use the Japanese company's processor chips in mid-range and high-end Sun servers; these servers were announced on April 17, 2007, as the M-Series, part of the SPARC Enterprise series. In February 2005, Sun announced the Sun Grid, a grid computing deployment on which it offered utility computing services priced at US$1 per CPU/hour for processing and per GB/month for storage. This offering built upon an existing 3,000-CPU server farm used for internal R&D for over 10 years, which Sun marketed as being able to achieve 97% utilization. In August 2005, the first commercial use of this grid was announced for financial risk simulations, launched as its first software as a service product. In January 2005, Sun reported a net profit of $19 million for the fiscal 2005 second quarter, for the first time in three years.
This was followed by a net loss of $9 million on a GAAP basis for the third quarter of 2005, as reported on April 14, 2005. In January 2007, Sun reported a net GAAP profit of $126