Massachusetts Institute of Technology
The Massachusetts Institute of Technology (MIT) is a private research university in Cambridge, Massachusetts. Founded in 1861 in response to the increasing industrialization of the United States, MIT adopted a European polytechnic university model and stressed laboratory instruction in applied science and engineering. The Institute is a land-grant, sea-grant, and space-grant university, with a campus that extends more than a mile alongside the Charles River. Its influence in the physical sciences and architecture, and more recently in biology, linguistics, the social sciences, and art, has made it one of the most prestigious universities in the world. MIT is ranked among the world's top universities; as of March 2019, 93 Nobel laureates, 26 Turing Award winners, and 8 Fields Medalists have been affiliated with MIT as alumni, faculty members, or researchers. In addition, 58 National Medal of Science recipients, 29 National Medal of Technology and Innovation recipients, 50 MacArthur Fellows, 73 Marshall Scholars, 45 Rhodes Scholars, 41 astronauts, and 16 Chief Scientists of the US Air Force have been affiliated with MIT.
The school has a strong entrepreneurial culture; the aggregated annual revenues of companies founded by MIT alumni would rank as the tenth-largest economy in the world. MIT is a member of the Association of American Universities. In 1859, a proposal was submitted to the Massachusetts General Court to use newly filled lands in Back Bay, Boston for a "Conservatory of Art and Science", but the proposal failed. A charter for the incorporation of the Massachusetts Institute of Technology, proposed by William Barton Rogers, was signed by the governor of Massachusetts on April 10, 1861. Rogers, a professor from the University of Virginia, wanted to establish an institution to address rapid scientific and technological advances. He did not wish to found a professional school, but rather an institution combining elements of both professional and liberal education, proposing that: "The true and only practicable object of a polytechnic school is, as I conceive, the teaching, not of the minute details and manipulations of the arts, which can be done only in the workshop, but the inculcation of those scientific principles which form the basis and explanation of them, along with this, a full and methodical review of all their leading processes and operations in connection with physical laws."
The Rogers Plan reflected the German research university model, emphasizing an independent faculty engaged in research, as well as instruction oriented around seminars and laboratories. Two days after MIT was chartered, the first battle of the Civil War broke out. After a long delay through the war years, MIT's first classes were held in the Mercantile Building in Boston in 1865. The new institute was founded as a land-grant school under the Morrill Land-Grant Colleges Act, which funded institutions "to promote the liberal and practical education of the industrial classes". In 1863, under the same act, the Commonwealth of Massachusetts founded the Massachusetts Agricultural College, which developed into the University of Massachusetts Amherst. In 1866, the proceeds from land sales went toward new buildings in the Back Bay. MIT was informally called "Boston Tech". The institute adopted the European polytechnic university model and emphasized laboratory instruction from an early date. Despite chronic financial problems, the institute saw growth in the last two decades of the 19th century under President Francis Amasa Walker.
Programs in electrical, chemical, and sanitary engineering were introduced, new buildings were built, and the size of the student body increased to more than one thousand. The curriculum drifted, with less focus on theoretical science, and the fledgling school still suffered from chronic financial shortages that diverted the attention of the MIT leadership. During these "Boston Tech" years, MIT faculty and alumni rebuffed Harvard University president Charles W. Eliot's repeated attempts to merge MIT with Harvard College's Lawrence Scientific School; there would be at least six attempts to absorb MIT into Harvard. In its cramped Back Bay location, MIT could not afford to expand its overcrowded facilities, driving a desperate search for a new campus and funding. The MIT Corporation approved a formal agreement to merge with Harvard, over the vehement objections of MIT faculty and alumni, but a 1917 decision by the Massachusetts Supreme Judicial Court put an end to the merger scheme. In 1916, the MIT administration and the MIT charter crossed the Charles River on the ceremonial barge Bucentaur, built for the occasion, to signify MIT's move to a spacious new campus consisting of filled land on a mile-long tract along the Cambridge side of the Charles River.
The neoclassical "New Technology" campus was designed by William W. Bosworth and had been funded by anonymous donations from a mysterious "Mr. Smith", starting in 1912. In January 1920, the donor was revealed to be the industrialist George Eastman of Rochester, New York, who had invented methods of film production and processing, founded Eastman Kodak. Between 1912 and 1920, Eastman donated $20 million in cash and Kodak stock to MIT. In the 1930s, President Karl Taylor Compton and Vice-President Vannevar Bush emphasized the importance of pure sciences like physics and chemistry and reduced the vocational practice required in shops and drafting studios; the Compton reforms "renewed confidence in the ability of the Institute to develop leadership in science as well as in engineering". Unlike Ivy League schools, MIT catered more to middle-class families, depended more on tuition than on endow
Stuart Madnick
Stuart E. Madnick is an American computer scientist and professor of information technology at the MIT Sloan School of Management and the Massachusetts Institute of Technology School of Engineering. He is the director of the MIT Interdisciplinary Consortium for Improving Critical Infrastructure Cybersecurity. Madnick has degrees in electrical engineering and computer science from MIT. He has been a faculty member at MIT since 1972 and has served as the head of MIT's Information Technologies Group for more than twenty years. He has been an affiliate member of MIT's Laboratory for Computer Science, a member of the research advisory committee of the International Financial Services Research Center, and a member of the executive committee of the MIT Center for Information Systems Research. In 2010, Madnick was the John Norris Maguire Professor of Information Technology at the MIT Sloan School of Management and Professor of Engineering Systems at the MIT School of Engineering. He has been a Visiting Professor at Harvard University, Nanyang Technological University, the University of Newcastle, and Victoria University.
His current research interests include connectivity among disparate distributed information systems, database technology, software project management, and the strategic use of information technology. He is co-director of the PROductivity From Information Technology Initiative and co-heads the Total Data Quality Management research program. He has been the Principal Investigator of a large-scale DARPA-funded research effort on Context Interchange, which involves the development of technology that helps organizations work more cooperatively and collaboratively. As part of this effort, he is co-inventor on the patents "Querying Heterogeneous Data Sources over a Network Using Context Interchange" and "Data Extraction from World Wide Web Pages." He has been active in industry, making significant contributions as a key designer and developer of projects such as IBM's VM/370 operating system and Lockheed's DIALOG information retrieval system. He has served as a consultant to many major corporations, such as IBM, AT&T, and Citicorp.
He has been the founder or co-founder of several high-tech firms, including Intercomp, the Cambridge Institute for Information Systems (founded with John J. Donovan), and iAggregate, and he now operates a hotel in the 14th-century Langley Castle in England. Madnick is involved with the research effort at BMLL Technologies, a Cambridge spin-off working in the field of machine learning on the limit order book. Madnick is the author or co-author of over 250 books, articles, and reports, including the textbooks Operating Systems and The Dynamics of Software Development, and he has contributed chapters to other books, such as Information Technology in Action. In 1965, he developed the Little Man Computer model, still used to introduce computer architecture concepts. In 1968, Madnick developed SCRIPT, a text markup language for IBM z/VM and z/OS systems, still in use as part of IBM's Document Composition Facility; the current version is called SCRIPT/VS.
Web application
In computing, a web application or web app is a client–server computer program in which the client runs in a web browser. Common web applications include webmail, online retail sales, and online auctions. The general distinction between a dynamic web page of any kind and a "web application" is unclear; web sites most likely to be referred to as "web applications" are those which have functionality similar to a desktop software application or to a mobile app. HTML5 introduced explicit language support for making applications that are loaded as web pages but can store data locally and continue to function while offline. Single-page applications are more application-like because they reject the more typical web paradigm of moving between distinct pages with different URLs; single-page frameworks like Sencha Touch and AngularJS might be used to speed development of such a web app for a mobile platform. There are several ways of targeting mobile devices when making a web application. Responsive web design can be used to make a web application, whether a conventional website or a single-page application, viewable on small screens and work well with touchscreens.
Progressive Web Apps are web applications that load like regular web pages or websites but can offer the user functionality such as working offline, push notifications, and device hardware access traditionally available only to native mobile applications. Native apps or "mobile apps" run directly on a mobile device, just as a conventional software application runs directly on a desktop computer, without a web browser. Frameworks like React Native, Flutter, and FuseTools allow the development of native apps for all platforms using languages other than each platform's standard native language. Hybrid apps embed a mobile web site inside a native app using a hybrid framework like Apache Cordova, Ionic, or Appcelerator Titanium; this allows development using web technologies while retaining certain advantages of native apps. In earlier computing models like client–server, the processing load for the application was shared between code on the server and code installed on each client locally. In other words, an application had its own pre-compiled client program which served as its user interface and had to be separately installed on each user's personal computer.
An upgrade to the server-side code of the application would also require an upgrade to the client-side code installed on each user workstation, adding to the support cost and decreasing productivity. In addition, both the client and server components of the application were tightly bound to a particular computer architecture and operating system, and porting them to others was prohibitively expensive for all but the largest applications. In contrast, web applications use web documents written in a standard format such as HTML and JavaScript, which are supported by a variety of web browsers. Web applications can be considered a specific variant of client–server software where the client software is downloaded to the client machine when visiting the relevant web page, using standard procedures such as HTTP. Client web software updates may happen each time the web page is visited. During the session, the web browser interprets and displays the pages and acts as the universal client for any web application. In the early days of the Web, each individual web page was delivered to the client as a static document, but the sequence of pages could still provide an interactive experience, as user input was returned through web form elements embedded in the page markup.
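As a rough illustration of this delivery model (not drawn from the article), the following Python sketch uses only the standard library's http.server to serve a small HTML document whose embedded script then runs in the browser; the page contents, host, and port are arbitrary assumptions.

```python
# Minimal sketch: the server delivers HTML (with client-side script) over HTTP,
# and the visitor's browser acts as the universal client. Host/port are arbitrary.
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"""<!DOCTYPE html>
<html>
  <head><title>Hello</title></head>
  <body>
    <p id="out">Loading...</p>
    <script>document.getElementById('out').textContent = 'Rendered on the client';</script>
  </body>
</html>"""

class AppHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every visit re-delivers the current client code, so updating the app
        # requires no installation step on the user's machine.
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), AppHandler).serve_forever()
```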
However, every significant change to the web page required a round trip back to the server to refresh the entire page. In 1995, Netscape introduced a client-side scripting language called JavaScript, allowing programmers to add dynamic elements to the user interface that ran on the client side. So instead of sending data to the server in order to generate an entire web page, the embedded scripts of the downloaded page can perform various tasks such as input validation or showing and hiding parts of the page. In 1996, Macromedia introduced Flash, a vector animation player that could be added to browsers as a plug-in to embed animations on web pages; it allowed the use of a scripting language to program interactions on the client side with no need to communicate with the server. In 1999, the "web application" concept was introduced in the Java language in the Servlet Specification version 2.2. At that time both JavaScript and XML had already been developed, but the term Ajax had not yet been coined and the XMLHttpRequest object had only been introduced on Internet Explorer 5 as an ActiveX object.
In 2005, the term Ajax was coined, and applications like Gmail started to make their client sides more and more interactive: a web page script is able to contact the server for storing and retrieving data without downloading an entire web page. In 2011, HTML5 was finalized, providing graphic and multimedia capabilities without the need for client-side plug-ins. HTML5 also enriched the semantic content of documents; the APIs and document object model are no longer afterthoughts but are fundamental parts of the HTML5 specification. The WebGL API paved the way for advanced 3D graphics based on the JavaScript language; these capabilities have significant importance in creating platform- and browser-independent web applications.
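To make the Ajax pattern above concrete, here is a hedged Python sketch of the server side only: a minimal endpoint that returns a small JSON fragment a page script could fetch and insert without reloading the whole page. The /data path and the payload are illustrative assumptions, not part of any real application.

```python
# Sketch of the server half of an Ajax-style exchange: the page's script would
# request /data and update part of the page in place. Path and payload are invented.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AjaxHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/data":
            body = json.dumps({"unread": 3, "status": "ok"}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)   # only this fragment travels, not a whole page
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8001), AjaxHandler).serve_forever()
```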
Image editing
Image editing encompasses the processes of altering images, whether they are digital photographs, traditional photo-chemical photographs, or illustrations. Traditional analog image editing is known as photo retouching, using tools such as an airbrush to modify photographs, or editing illustrations with any traditional art medium. Graphic software programs, which can be broadly grouped into vector graphics editors, raster graphics editors, and 3D modelers, are the primary tools with which a user may manipulate and transform images. Many image editing programs are also used to render or create computer art from scratch. Raster images are composed of pixels, which contain the image's color and brightness information. Image editors can change the pixels to enhance the image in many ways; the pixels can be changed as a group, or individually, by the sophisticated algorithms within the image editors. This article refers to bitmap graphics editors, which are used to alter photographs and other raster graphics. However, vector graphics software, such as Adobe Illustrator, CorelDRAW, Xara Designer Pro, or Inkscape, is used to create and modify vector images, which are stored as descriptions of lines, Bézier curves, and text instead of pixels.
It is easier to rasterize a vector image than to vectorize a raster image. Vector images can be modified more easily because they contain descriptions of the shapes for easy rearrangement, and they are scalable, being rasterizable at any resolution. Camera or computer image editing programs often offer basic automatic image enhancement features that correct color hue and brightness imbalances, as well as other image editing features such as red-eye removal, sharpness adjustments, zoom features, and automatic cropping. These are called automatic because they happen without user interaction or are offered with one click of a button or mouse button or by selecting an option from a menu. Additionally, some automatic editing features offer a combination of editing actions with little or no user interaction. Many image file formats use data compression to save storage space. Digital compression of images may take place in the camera, or can be done in the computer with the image editor; when images are stored in JPEG format, compression has already taken place.
Both cameras and computer programs allow the user to set the level of compression. Some compression algorithms, such as that used in the PNG file format, are lossless, which means no information is lost when the file is saved. By contrast, the JPEG file format uses a lossy compression algorithm in which the greater the compression, the more information is lost, reducing image quality or detail in a way that cannot be restored. JPEG uses knowledge of the way the human brain and eyes perceive color to make this loss of detail less noticeable. Listed below are some of the most used capabilities of the better graphic manipulation programs; the list is by no means all-inclusive, and there are a myriad of choices associated with the application of most of these features. One of the prerequisites for many of the applications mentioned below is a method of selecting part of an image, thus applying a change selectively without affecting the entire picture. Most graphics programs have several means of accomplishing this, such as a marquee tool for selecting rectangular or other regular polygon-shaped regions, a lasso tool for freehand selection of a region, a magic wand tool that selects objects or regions in the image defined by proximity of color or luminance, and vector-based pen tools, as well as more advanced facilities such as edge detection, alpha compositing, and color- and channel-based extraction.
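As a rough sketch of the "magic wand" idea above (selection by proximity of color), the following Python example flood-fills outward from a seed pixel using the Pillow library; it is an illustrative approximation, not the algorithm of any particular editor, and the file name, seed point, and tolerance are assumptions.

```python
# Hedged sketch of a magic-wand-style selection: grow the selection from a seed
# pixel over neighbours whose colour stays within a tolerance of the seed colour.
from collections import deque
from PIL import Image  # Pillow is assumed to be installed

def magic_wand(path, seed, tolerance=32):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    px = img.load()
    target = px[seed]                 # colour of the seed pixel
    selected = set()
    queue = deque([seed])
    while queue:
        x, y = queue.popleft()
        if (x, y) in selected or not (0 <= x < w and 0 <= y < h):
            continue
        r, g, b = px[x, y]
        # "Proximity of colour": reject pixels too far from the seed colour.
        if max(abs(r - target[0]), abs(g - target[1]), abs(b - target[2])) > tolerance:
            continue
        selected.add((x, y))
        queue.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return selected                   # set of (x, y) coordinates in the selection

# selection = magic_wand("photo.jpg", seed=(10, 10))   # placeholder file and seed
```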
The border of a selected area in an image is often animated with the marching ants effect to help the user distinguish the selection border from the image background. Another feature common to many graphics applications is that of layers, which are analogous to sheets of transparent acetate, stacked on top of each other, each capable of being individually positioned and blended with the layers below, without affecting any of the elements on the other layers. This is a fundamental workflow which has become the norm for the majority of programs on the market today, and it enables maximum flexibility for the user while maintaining non-destructive editing principles and ease of use. Image editors can resize images in a process called image scaling, making them larger or smaller. High-resolution cameras can produce large images which are often reduced in size for Internet use. Image editor programs use a mathematical process called resampling to calculate new pixel values whose spacing is larger or smaller than the original pixel values.
Images for Internet use are kept small, say 640 x 480 pixels, which would equal about 0.3 megapixels. Digital editors are also used to crop images. Cropping creates a new image by selecting a desired rectangular portion from the image being cropped; the unwanted part of the image is discarded. Image cropping does not reduce the resolution of the area cropped; best results are obtained when the original image has a high resolution. A primary reason for cropping is to improve the image composition in the new image. Using a selection tool, the outline of the figure or element in the picture is traced or selected, and the background is then removed. Depending on how intricate the "edge" is, this may be more or less difficult to do cleanly. For example, individual hairs can require a lot of work. Hence the use of the "green screen" technique, which allows one to remove the background.
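The following hedged Python sketch, using the Pillow library, illustrates the resampling, cropping, and lossy-versus-lossless saving discussed above; file names, target sizes, and the crop box are placeholders rather than values from any real workflow.

```python
# Sketch of resampling, cropping and saving with Pillow (assumed installed).
from PIL import Image

img = Image.open("photo.jpg")        # placeholder path

# Image scaling: resampling computes new pixel values on a larger or smaller grid.
thumb = img.resize((640, 480), resample=Image.LANCZOS)
thumb.save("photo_640x480.jpg")      # roughly 0.3 megapixels, as in the text

# Cropping: keep a rectangular region (left, upper, right, lower); the rest is
# discarded, and the resolution of the retained area is unchanged.
detail = img.crop((100, 100, 900, 700))

# JPEG is lossy: a lower quality setting gives a smaller file but discards detail;
# saving the same region as PNG is lossless.
detail.save("photo_cropped.jpg", quality=85)
detail.save("photo_cropped.png")
```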
Amar Gupta
Amar Gupta is a computer scientist from Gujarat, India, now based in the United States. Gupta is the former Dean of the Seidenberg School of Computer Science and Information Systems at Pace University. He is the Thomas R. Brown Professor of Management and Technology in the Eller College of Management at the University of Arizona, as well as a Professor of Computer Science in the College of Science, Professor of Latin American Studies in the College of Social and Behavioral Sciences, Professor of Community and Policy in the Mel & Enid Zuckerman College of Public Health, Professor at the James E. Rogers College of Law, a member of the HOPE Center in the College of Pharmacy, and the Director of the Nexus of Entrepreneurship and Technology Initiative at the University of Arizona. Gupta was born in 1953 in Nadiad, Gujarat, and studied electrical engineering at the Indian Institute of Technology, graduating in 1974. He started his career at IBM and served in various technical advisory roles for the Government of India before pursuing graduate studies at the MIT Sloan School of Management in 1979.
In 1980, he received a master's degree in management from MIT and a PhD from the Indian Institute of Technology, Delhi. He remained at Sloan until 2004 and was the first person to attain the rank of Senior Research Scientist at MIT Sloan. In this position, in cooperation with Professor Lester Thurow, he launched the United States' first course on international outsourcing. He has served as an advisor to several UN organizations, including the World Health Organization, the United Nations Development Program, UNIDO, and the World Bank, on various aspects of national policy and large-scale information management in the context of the needs of both individual agencies and member governments. He led a UNDP team to plan and implement a national financial information infrastructure in a Latin American country where 40 percent of the banks had gone bankrupt. Gupta was part of the expert group established by the WHO to formulate policy guidelines for health informatics; these guidelines were subsequently ratified as national guidelines by over 100 countries.
He served as a UNDP advisor on a $500 million nationwide effort to get computers into every school in Brazil, and as a World Bank advisor on a distance education endeavor in Mozambique. He secured approval for the proposal to establish two UN Centers of Excellence in Information Technology. Gupta's current appointment is at the Eller College of Management at the University of Arizona as the Thomas R. Brown Professor of Management and Technology and Professor of Entrepreneurship and MIS. In this role, he has established dual degree programs with the Colleges of Agriculture, Engineering, and Optics; the program was designed to lead to a certificate in entrepreneurship. Gupta played a significant role in creating the vision for new interdisciplinary research initiatives, such as a proposed multi-college endeavor that would enable the United States and Mexico to enhance healthcare in bordering areas through mutual cooperation without investing any additional funds. Another potential endeavor involves the creation of a new International Center of Excellence funded through private donations.
As the founder and head of the "Nexus of Entrepreneurship and Technology" initiative, Gupta has interacted with the trustees of the Thomas R. Brown Foundation to delineate and refine ideas that are of high interest to the individuals who sponsored the endowed chair; the foundation has contributed more toward this chair than its original commitment. Gupta's lectures include guest speakers from major global businesses, the healthcare industry, and foreign government ministers. As a tenured and endowed professor in the field of entrepreneurship, he has developed new courses that focus on innovation and entrepreneurship in a global economy through the deployment of distributed work teams. On April 25, 2011, Gupta was presented with the Outstanding Faculty Member of the Year award for Eller College in the small-class-size category. Among Gupta's most distinguished students are Ronjon Nag, founder of Lexicus and Cellmania; Nitin Nohria, current Dean of Harvard Business School; and Salman Khan, founder of Khan Academy.
Gupta has continued to maintain active ties with MIT, including as Visiting Professor, Visiting Senior Scientist, and Visiting Scientist in the College of Engineering of MIT during the summers of 2005-2011. Gupta served as chief scientist and vice president for the development of VCN ExecuVision, the first presentation graphics program; the company, Visual Communications Network, pioneered the development of clip art for the IBM personal computer. At MIT, Gupta led a team of researchers to develop technology to automatically read handwritten information on checks and proposed a nationwide check clearance system allowing the electronic clearance of printed and handwritten checks; this innovation is manifested in the Check 21 system in the U.S. and in similar approaches in Singapore and Brazil. Gupta and his colleagues developed the first microcomputer-based image database management system. The concept of the 24-Hour Knowledge Factory was developed by Gupta; it allows multiple professionals in different geographical locations to work together to perform a single task or project.
Research is being conducted to utilize this model in a variety of industries. In 2007, Gupta was awarded an IBM faculty award for this vision. The 24-Hour Knowledge Factory is inspired by the Industrial Revolution: prior to the Revolution, manufacturing was a cottage industry whe
Reverse image search
Reverse image search is a content-based image retrieval (CBIR) query technique that involves providing the CBIR system with a sample image on which it will base its search. In particular, reverse image search is characterized by a lack of search terms; this removes the need for a user to guess at keywords or terms that may or may not return a correct result. Reverse image search allows users to discover content related to a specific sample image, gauge the popularity of an image, and discover manipulated versions and derivative works. Reverse image search may be used to locate the source of an image, find higher-resolution versions, discover webpages where the image appears, track down the content creator, and get information about an image. Commonly used reverse image search algorithms include the scale-invariant feature transform (used to extract local features of an image), maximally stable extremal regions, and the vocabulary tree. Google's Search by Image is a feature that utilizes reverse image search and allows users to search for related images just by uploading an image or an image URL.
Google accomplishes this by analyzing the submitted picture and constructing a mathematical model of it using advanced algorithms. The model is compared with billions of other images in Google's databases before matching and similar results are returned; when available, Google also uses metadata about the image, such as its description. TinEye is a search engine specializing in reverse image search. Upon submission of an image, TinEye creates a "unique and compact digital signature or fingerprint" of the image and matches it with other indexed images; this procedure is able to match heavily edited versions of the submitted image, but will not return similar images in the results. eBay ShopBot uses reverse image search to find products from a user-uploaded photo. eBay uses a ResNet-50 network for category recognition, and image hashes are stored in Google Bigtable. SK Planet uses reverse image search to find related fashion items on its e-commerce website; it developed its vision encoder network based on the TensorFlow Inception-v3 model, chosen for its speed of convergence and generalization for production usage.
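As an illustration of signature-based matching in the spirit of, but not identical to, TinEye's proprietary fingerprints, the following Python sketch computes a simple difference hash (dHash) with Pillow; near-duplicate images tend to yield hashes that differ in only a few bits. File names and the hash size are placeholders.

```python
# Hedged sketch of an image "fingerprint": a 64-bit difference hash.
from PIL import Image  # Pillow is assumed to be installed

def dhash(path, hash_size=8):
    # Shrink to grayscale and a tiny fixed size so only coarse structure remains.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = []
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits.append(1 if left > right else 0)   # compare adjacent pixels
    return int("".join(map(str, bits)), 2)

def hamming(a, b):
    # Number of differing bits between two fingerprints.
    return bin(a ^ b).count("1")

# A distance of only a few bits suggests the two images are near-duplicates.
# print(hamming(dhash("original.jpg"), dhash("edited.jpg")))
```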
A recurrent neural network is used for multi-class classification, and fashion-product region-of-interest detection is based on Faster R-CNN. SK Planet's reverse image search system was built in under 100 man-months. Alibaba released the Pailitao application in 2014. Pailitao allows users to search for items on Alibaba's e-commerce platform by taking a photo of the query object; the application uses a deep CNN model with branches for joint detection and feature learning to discover the detection mask and exact discriminative feature without background disturbance. GoogLeNet V1 is employed as the base model for category feature learning. Pinterest introduced visual search on its platform. In 2015, Pinterest published a paper at the ACM Conference on Knowledge Discovery and Data Mining and disclosed the architecture of the system; the pipeline uses Apache Hadoop, the open-source Caffe convolutional neural network framework, Cascading for batch processing, PinLater for messaging, and Apache HBase for storage.
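To illustrate how feature vectors produced by such convolutional networks can be compared, the following hedged Python sketch ranks a catalogue by cosine similarity to a query vector using NumPy; the vectors are random stand-ins rather than real image features, and none of the systems above is claimed to work exactly this way.

```python
# Hedged sketch: nearest-neighbour lookup over feature vectors by cosine similarity.
import numpy as np

rng = np.random.default_rng(0)
index = rng.normal(size=(1000, 2048))          # pretend catalogue of 1000 images
query = rng.normal(size=2048)                  # pretend query-image features

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

scores = normalize(index) @ normalize(query)   # cosine similarity against every item
top5 = np.argsort(scores)[::-1][:5]            # indices of the 5 most similar items
print("closest catalogue items:", top5)
```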
Image characteristics, including local features, deep features, salient color signatures, and salient pixels, are extracted from user uploads. The system runs on Amazon EC2 and requires only a cluster of 5 GPU instances to handle the daily image uploads onto Pinterest. By using reverse image search, Pinterest is able to extract visual features from fashion objects and offer product recommendations that look similar. Microsoft Research Asia's Beijing Lab published a paper in the Proceedings of the IEEE on the Arista-SS and Arista-DS systems. Arista-DS only performs duplicate-search algorithms, such as principal component analysis on global image features, in order to lower computational and memory costs. Arista-DS is able to perform duplicate search on 2 billion images with 10 servers, but with the trade-off of not detecting near duplicates. See also: Google Images, Bing, Yandex Images, Search by Image (powered by Google), content-based image retrieval, visual search engine, FindFace.
Metadata
Metadata is "data that provides information about other data". Many distinct types of metadata exist, among these descriptive metadata, structural metadata, administrative metadata, reference metadata and statistical metadata. Descriptive metadata describes a resource for purposes such as identification, it can include elements such as title, abstract and keywords. Structural metadata is metadata about containers of data and indicates how compound objects are put together, for example, how pages are ordered to form chapters, it describes the types, versions and other characteristics of digital materials. Administrative metadata provides information to help manage a resource, such as when and how it was created, file type and other technical information, who can access it. Reference metadata describes the contents and quality of statistical data Statistical metadata may describe processes that collect, process, or produce statistical data. Metadata was traditionally used in the card catalogs of libraries until the 1980s, when libraries converted their catalog data to digital databases.
In the 2000s, as digital formats became the prevalent way of storing data and information, metadata was increasingly used to describe digital data using metadata standards. The first description of "meta data" for computer systems is purportedly noted by MIT's Center for International Studies experts David Griffel and Stuart McIntosh in 1967: "In summary we have statements in an object language about subject descriptions of data and token codes for the data. We have statements in a meta language describing the data relationships and transformations, ought/is relations between norm and data." There are different metadata standards for each different discipline. Describing the contents and context of data or data files increases their usefulness. For example, a web page may include metadata specifying what software language the page is written in, what tools were used to create it, what subjects the page is about, and where to find more information about the subject; this metadata can automatically improve the reader's experience and make it easier for users to find the web page online.
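As a small illustration of the web-page metadata just described, the following Python sketch uses the standard library's html.parser to pull <meta> tags out of an HTML document; the sample HTML and its tag values are invented for the example.

```python
# Hedged sketch: extracting <meta> metadata from an HTML page with html.parser.
from html.parser import HTMLParser

SAMPLE = """<html><head>
  <meta charset="utf-8">
  <meta name="generator" content="ExampleCMS 4.2">
  <meta name="keywords" content="metadata, example">
  <meta name="description" content="A page about metadata.">
</head><body>...</body></html>"""

class MetaExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = []
    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            self.meta.append(dict(attrs))   # e.g. {'name': 'keywords', 'content': ...}

parser = MetaExtractor()
parser.feed(SAMPLE)
for entry in parser.meta:
    print(entry)
```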
A CD may include metadata providing information about the musicians and songwriters whose work appears on the disc. A principal purpose of metadata is to help users discover resources. Metadata helps to organize electronic resources, provide digital identification, support the archiving and preservation of resources. Metadata assists users in resource discovery by "allowing resources to be found by relevant criteria, identifying resources, bringing similar resources together, distinguishing dissimilar resources, giving location information." Metadata of telecommunication activities including Internet traffic is widely collected by various national governmental organizations. This data can be used for mass surveillance. In many countries, the metadata relating to emails, telephone calls, web pages, video traffic, IP connections and cell phone locations are stored by government organizations. Metadata means "data about data". Although the "meta" prefix means "after" or "beyond", it is used to mean "about" in epistemology.
Metadata is defined as the data providing information about one or more aspects of the data. Examples include the means of creation of the data, the purpose of the data, the time and date of creation, the creator or author of the data, the location on a computer network where the data was created, the standards used, the file size, the data quality, the source of the data, and the process used to create the data. For example, a digital image may include metadata that describes how large the picture is, the color depth, the image resolution, when the image was created, the shutter speed, and other data. A text document's metadata may contain information about how long the document is, who the author is, when the document was written, and a short summary of the document. Metadata within web pages can contain descriptions of page content, as well as keywords linked to the content; these descriptions are contained in "metatags", which were used as the primary factor in determining order for a web search until the late 1990s. The reliance on metatags in web searches decreased in the late 1990s because of "keyword stuffing".
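As a concrete illustration of the file and image metadata listed above, the following hedged Python sketch reads file-system metadata with os.stat and embedded EXIF tags with Pillow; the file path is a placeholder, and the tags actually present depend on the particular file.

```python
# Hedged sketch: inspecting two layers of metadata for a digital image.
import os
from datetime import datetime
from PIL import Image, ExifTags  # Pillow is assumed to be installed

path = "photo.jpg"  # placeholder path

# File-system metadata: size, timestamps, etc.
st = os.stat(path)
print("file size (bytes):", st.st_size)
print("last modified:", datetime.fromtimestamp(st.st_mtime))

# Embedded image metadata: dimensions plus any EXIF tags the camera recorded.
img = Image.open(path)
print("dimensions:", img.size, "mode:", img.mode)
for tag_id, value in img.getexif().items():
    name = ExifTags.TAGS.get(tag_id, tag_id)   # e.g. DateTime, Model, Orientation
    print(name, "=", value)
```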
Metatags were being misused to trick search engines into thinking some websites had more relevance in the search than they really did. Metadata can be stored and managed in a database called a metadata registry or metadata repository. However, without context and a point of reference, it might be impossible to identify metadata just by looking at it. For example, by itself, a database containing several numbers, all 13 digits long, could be the results of calculations or a list of numbers to plug into an equation; without any other context, the numbers themselves can be perceived as the data. But given the context that this database is a log of a book collection, those 13-digit numbers may now be identified as ISBNs: information that refers to the book but is not itself the information within the book. The term "metadata" was coined in 1968 by Philip Bagley, in his book "Extension of Programming Language Concepts", where it is clear that he uses the term in the ISO 11179 "traditional" sense, "structural metadata", i.e. "data about the containers of data".
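Following the ISBN example above, this short Python sketch shows how the 13-digit numbers, once understood as ISBNs, can even be checked against the ISBN-13 check-digit rule (alternating weights of 1 and 3, with the weighted sum divisible by 10); the sample numbers are standard illustrations, not drawn from any real catalogue.

```python
# Illustration: with context, a bare 13-digit number can be validated as an ISBN-13.
def is_valid_isbn13(number: str) -> bool:
    digits = [int(c) for c in number if c.isdigit()]
    if len(digits) != 13:
        return False
    total = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
    return total % 10 == 0

print(is_valid_isbn13("9780306406157"))   # True: a well-known valid example
print(is_valid_isbn13("9780306406158"))   # False: the check digit does not match
```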