AI Artificial Intelligence

From Wikipedia, the free encyclopedia


A

B

C

D

  • Darkforest – is a computer go program developed by Facebook, based on deep learning techniques using a convolutional neural network. Its updated version Darkfores2 combines the techniques of its predecessor with Monte Carlo tree search.[125][126] The MCTS effectively takes tree search methods commonly seen in computer chess programs and randomizes them.[127] With the update, the system is known as Darkfmcts3.[128]
  • Dartmouth workshop – The Dartmouth Summer Research Project on Artificial Intelligence was the name of a 1956 summer workshop now considered by many[129][130] (though not all[131]) to be the seminal event for artificial intelligence as a field.
  • Data fusion – is the process of integrating multiple data sources to produce more consistent, accurate, and useful information than that provided by any individual data source.[132]
  • Data integration – involves combining data residing in different sources and providing users with a unified view of them.[133] This process becomes significant in a variety of situations, which include both commercial (such as when two similar companies need to merge their databases) and scientific (combining research results from different bioinformatics repositories, for example) domains. Data integration appears with increasing frequency as the volume (that is, big data) and the need to share existing data explodes.[134] It has become the focus of extensive theoretical work, and numerous open problems remain unsolved.
  • Data mining – is the process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems.
  • Data science – is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from data in various forms, both structured and unstructured,[135][136] similar to data mining. Data science is a “concept to unify statistics, data analysis, machine learning and their related methods” in order to “understand and analyze actual phenomena” with data.[137] It employs techniques and theories drawn from many fields within the context of mathematics, statistics, information science, and computer science.
  • Data set – (or dataset) is a collection of data. Most commonly a data set corresponds to the contents of a single database table, or a single statistical data matrix, where every column of the table represents a particular variable, and each row corresponds to a given member of the data set in question. The data set lists values for each of the variables, such as height and weight of an object, for each member of the data set. Each value is known as a datum. The data set may comprise data for one or more members, corresponding to the number of rows.
  • Data warehouse – (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis.[138] DWs are central repositories of integrated data from one or more disparate sources. They store current and historical data in one single place.[139]
  • Datalog – is a declarative logic programming language that syntactically is a subset of Prolog. It is often used as a query language for deductive databases. In recent years, Datalog has found new application in data integration, information extraction, networking, program analysis, security, and cloud computing.[140]
  • Decision boundary – In the case of backpropagation-based artificial neural networks or perceptrons, the type of decision boundary that the network can learn is determined by the number of hidden layers the network has. If it has no hidden layers, then it can only learn linear problems. If it has one hidden layer, then it can learn any continuous function on compact subsets of R^n, as shown by the universal approximation theorem, and thus it can have an arbitrary decision boundary.
  • Decision support system – (DSS), is an information system that supports business or organizational decision-making activities. DSSs serve the management, operations and planning levels of an organization (usually mid and higher management) and help people make decisions about problems that may be rapidly changing and not easily specified in advance—i.e. unstructured and semi-structured decision problems. Decision support systems can be either fully computerized or human-powered, or a combination of both.
  • Decision theory – (or the theory of choice) is the study of the reasoning underlying an agent’s choices.[141] Decision theory can be broken into two branches: normative decision theory, which gives advice on how to make the best decisions given a set of uncertain beliefs and a set of values, and descriptive decision theory which analyzes how existing, possibly irrational agents actually make decisions.
  • Decision tree learning – uses a decision tree (as a predictive model) to go from observations about an item (represented in the branches) to conclusions about the item’s target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning.
  • Declarative programming – is a programming paradigm—a style of building the structure and elements of computer programs—that expresses the logic of a computation without describing its control flow.[142]
  • Deductive classifier – is a type of artificial intelligence inference engine. It takes as input a set of declarations in a frame language about a domain such as medical research or molecular biology. For example, the names of classes, sub-classes, properties, and restrictions on allowable values.
  • Deep Blue – was a chess-playing computer developed by IBM. It is known for being the first computer chess-playing system to win both a chess game and a chess match against a reigning world champion under regular time controls.
  • Deep learning – (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised, or unsupervised.[143][144][145]
  • DeepMind – DeepMind Technologies is a British artificial intelligence company founded in September 2010, currently owned by Alphabet Inc. The company is based in London, with research centres in Canada,[146] France,[147] and the United States. Acquired by Google in 2014, the company has created a neural network that learns how to play video games in a fashion similar to that of humans,[148] as well as a Neural Turing machine,[149] or a neural network that may be able to access an external memory like a conventional Turing machine, resulting in a computer that mimics the short-term memory of the human brain.[150][151] The company made headlines in 2016 after its AlphaGo program beat a human professional Go player Lee Sedol, the world champion, in a five-game match, which was the subject of a documentary film.[152] A more general program, AlphaZero, beat the most powerful programs playing go, chess, and shogi (Japanese chess) after a few days of play against itself using reinforcement learning.[153]
  • Default logic – is a non-monotonic logic proposed by Raymond Reiter to formalize reasoning with default assumptions.
  • Description logic – Description logics (DL) are a family of formal knowledge representation languages. Many DLs are more expressive than propositional logic but less expressive than first-order logic. In contrast to the latter, the core reasoning problems for DLs are (usually) decidable, and efficient decision procedures have been designed and implemented for these problems. There are general, spatial, temporal, spatiotemporal, and fuzzy description logics, and each description logic features a different balance between DL expressivity and reasoning complexity by supporting different sets of mathematical constructors.[154]
  • Developmental robotics – (DevRob), sometimes called epigenetic robotics, is a scientific field which aims at studying the developmental mechanisms, architectures and constraints that allow lifelong and open-ended learning of new skills and new knowledge in embodied machines.
  • Diagnosis – is concerned with the development of algorithms and techniques that are able to determine whether the behaviour of a system is correct. If the system is not functioning correctly, the algorithm should be able to determine, as accurately as possible, which part of the system is failing, and which kind of fault it is facing. The computation is based on observations, which provide information on the current behaviour.
  • Dialogue system – or conversational agent (CA), is a computer system intended to converse with a human with a coherent structure. Dialogue systems have employed text, speech, graphics, haptics, gestures, and other modes for communication on both the input and output channel.
  • Dimensionality reduction – or dimension reduction, is the process of reducing the number of random variables under consideration[155] by obtaining a set of principal variables. It can be divided into feature selection and feature extraction.[156]
  • Discrete system – is a system with a countable number of states. Discrete systems may be contrasted with continuous systems, which may also be called analog systems. A finite discrete system is often modeled with a directed graph and is analyzed for correctness and complexity according to computational theory. Because discrete systems have a countable number of states, they may be described in precise mathematical models. A computer is a finite state machine that may be viewed as a discrete system. Because computers are often used to model not only other discrete systems but continuous systems as well, methods have been developed to represent real-world continuous systems as discrete systems. One such method involves sampling a continuous signal at discrete time intervals.
  • Distributed artificial intelligence – (DAI), also called Decentralized Artificial Intelligence,[157] is a subfield of artificial intelligence research dedicated to the development of distributed solutions for problems. DAI is closely related to and a predecessor of the field of multi-agent systems.
  • Dynamic epistemic logic – (DEL), is a logical framework dealing with knowledge and information change. Typically, DEL focuses on situations involving multiple agents and studies how their knowledge changes when events occur.
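The decision tree learning entry above describes going from observations (branch tests) to a target value (a leaf). The smallest instance of that idea is a decision stump, a tree with a single split; the sketch below is illustrative pure Python, with a toy dataset and function names that are not from any particular library:

```python
# Toy decision stump: one split on a numeric feature, majority vote in each leaf.
from collections import Counter

def fit_stump(xs, ys):
    """Try every midpoint between sorted feature values; keep the split
    that misclassifies the fewest training points."""
    best = None
    pairs = sorted(zip(xs, ys))
    for i in range(1, len(pairs)):
        threshold = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for x, y in pairs if x < threshold]
        right = [y for x, y in pairs if x >= threshold]
        if not left or not right:
            continue  # degenerate split (duplicate feature values)
        left_label = Counter(left).most_common(1)[0][0]
        right_label = Counter(right).most_common(1)[0][0]
        errors = (sum(y != left_label for y in left)
                  + sum(y != right_label for y in right))
        if best is None or errors < best[0]:
            best = (errors, threshold, left_label, right_label)
    return best[1:]  # (threshold, left_label, right_label)

def predict(stump, x):
    threshold, left_label, right_label = stump
    return left_label if x < threshold else right_label

# Height in cm -> toy class label
xs = [150, 155, 160, 180, 185, 190]
ys = ["short", "short", "short", "tall", "tall", "tall"]
stump = fit_stump(xs, ys)
print(predict(stump, 158))  # -> short
print(predict(stump, 183))  # -> tall
```

Real decision tree learners apply the same split search recursively to each leaf, usually scoring splits by entropy or Gini impurity rather than raw error count.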

E

F

  • Fast-and-frugal trees – a type of classification tree. Fast-and-frugal trees can be used as decision-making tools which operate as lexicographic classifiers, and, if required, associate an action (decision) to each class or category.[172]
  • Feature extraction – In machine learning, pattern recognition, and image processing, feature extraction starts from an initial set of measured data and builds derived values (features) intended to be informative and non-redundant, facilitating the subsequent learning and generalization steps, and in some cases leading to better human interpretations.
  • Feature learning – In machine learning, feature learning or representation learning[173] is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task.
  • Feature selection – In machine learning and statistics, feature selection, also known as variable selection, attribute selection or variable subset selection, is the process of selecting a subset of relevant features (variables, predictors) for use in model construction.
  • Federated learning – a type of machine learning that allows for training on multiple devices with decentralized data, thus helping preserve the privacy of individual users and their data.
  • First-order logic (also known as first-order predicate calculus and predicate logic) – a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects and allows the use of sentences that contain variables, so that rather than propositions such as Socrates is a man one can have expressions in the form “there exists X such that X is Socrates and X is a man”, where there exists is a quantifier and X is a variable.[174] This distinguishes it from propositional logic, which does not use quantifiers or relations.[175]
  • Fluent – a condition that can change over time. In logical approaches to reasoning about actions, fluents can be represented in first-order logic by predicates having an argument that depends on time.
  • Formal language – a set of words whose letters are taken from an alphabet and are well-formed according to a specific set of rules.
  • Forward chaining – (or forward reasoning) is one of the two main methods of reasoning when using an inference engine and can be described logically as repeated application of modus ponens. Forward chaining is a popular implementation strategy for expert systems, business rule systems, and production rule systems. The opposite of forward chaining is backward chaining. Forward chaining starts with the available data and uses inference rules to extract more data (from an end user, for example) until a goal is reached. An inference engine using forward chaining searches the inference rules until it finds one where the antecedent (If clause) is known to be true. When such a rule is found, the engine can conclude, or infer, the consequent (Then clause), resulting in the addition of new information to its data.[176]
  • Frame – an artificial intelligence data structure used to divide knowledge into substructures by representing “stereotyped situations.” Frames are the primary data structure used in artificial intelligence frame language.
  • Frame language – a technology used for knowledge representation in artificial intelligence. Frames are stored as ontologies of sets and subsets of the frame concepts. They are similar to class hierarchies in object-oriented languages although their fundamental design goals are different. Frames are focused on explicit and intuitive representation of knowledge whereas objects focus on encapsulation and information hiding. Frames originated in AI research and objects primarily in software engineering. However, in practice the techniques and capabilities of frame and object-oriented languages overlap significantly.
  • Frame problem – is the problem of finding adequate collections of axioms for a viable description of a robot environment.[177]
  • Friendly artificial intelligence (also friendly AI or FAI) – a hypothetical artificial general intelligence (AGI) that would have a positive effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained.
  • Futures studies – is the study of postulating possible, probable, and preferable futures and the worldviews and myths that underlie them.[178]
  • Fuzzy control system – a control system based on fuzzy logic—a mathematical system that analyzes analog input values in terms of logical variables that take on continuous values between 0 and 1, in contrast to classical or digital logic, which operates on discrete values of either 1 or 0 (true or false, respectively).[179][180]
  • Fuzzy logic – a form of many-valued logic in which the truth values of variables may take any degree of truth representable by a real number between 0 (completely false) and 1 (completely true) inclusive. It is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false. This contrasts with Boolean logic, in which the truth values of variables may only be the integer values 0 or 1.
  • Fuzzy rule – Fuzzy rules are used within fuzzy logic systems to infer an output based on input variables.
  • Fuzzy set – In classical set theory, the membership of elements in a set is assessed in binary terms according to a bivalent condition — an element either belongs or does not belong to the set. By contrast, fuzzy set theory permits the gradual assessment of the membership of elements in a set; this is described with the aid of a membership function valued in the real unit interval [0, 1]. Fuzzy sets generalize classical sets, since the indicator functions (aka characteristic functions) of classical sets are special cases of the membership functions of fuzzy sets, if the latter only take values 0 or 1.[181] In fuzzy set theory, classical bivalent sets are usually called crisp sets. The fuzzy set theory can be used in a wide range of domains in which information is incomplete or imprecise, such as bioinformatics.[182]
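The fuzzy logic and fuzzy set entries above can be made concrete with a single membership function. A minimal sketch in pure Python (the "warm" set and its temperature thresholds are invented for illustration), using the standard min/max definitions of fuzzy intersection and union:

```python
# Fuzzy set "warm": membership rises linearly from 0 at 10 degrees C to 1 at 25.
def warm(t):
    if t <= 10:
        return 0.0
    if t >= 25:
        return 1.0
    return (t - 10) / 15

# Common fuzzy operations: complement, intersection (min), union (max).
def f_not(mu):
    return 1.0 - mu

def f_and(mu1, mu2):
    return min(mu1, mu2)

def f_or(mu1, mu2):
    return max(mu1, mu2)

print(warm(10))            # 0.0  (element fully outside the set)
print(warm(25))            # 1.0  (element fully inside the set)
print(round(warm(19), 2))  # 0.6  (gradual, partial membership)
# Unlike a crisp set, "warm AND not warm" need not be empty:
print(round(f_and(warm(19), f_not(warm(19))), 2))  # 0.4
```

Note how the crisp case is recovered as a special case: if the membership function only ever returns 0 or 1, these operations behave exactly like the classical indicator-function set operations.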

G

H

  • Heuristic – is a technique designed for solving a problem more quickly when classic methods are too slow, or for finding an approximate solution when classic methods fail to find any exact solution. This is achieved by trading optimality, completeness, accuracy, or precision for speed. In a way, it can be considered a shortcut. A heuristic function, also called simply a heuristic, is a function that ranks alternatives in search algorithms at each branching step based on available information to decide which branch to follow. For example, it may approximate the exact solution.[188]
  • Hidden layer – an internal layer of neurons in an artificial neural network, not dedicated to input or output.
  • Hidden unit – a neuron in a hidden layer in an artificial neural network.
  • Hyper-heuristic – is a heuristic search method that seeks to automate, often by the incorporation of machine learning techniques, the process of selecting, combining, generating or adapting several simpler heuristics (or components of such heuristics) to efficiently solve computational search problems. One of the motivations for studying hyper-heuristics is to build systems which can handle classes of problems rather than solving just one problem.[189][190][191]
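The heuristic entry above describes a function that ranks alternatives at each branching step of a search. A minimal sketch of that idea is greedy best-first search on a grid, where the heuristic is the Manhattan distance to the goal; the grid size, walls, and function names below are illustrative assumptions:

```python
# Greedy best-first search: always expand the frontier node whose heuristic
# (estimated distance to the goal) is smallest. Fast, but not guaranteed optimal.
import heapq

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy_search(start, goal, walls, size=5):
    frontier = [(manhattan(start, goal), start, [start])]
    seen = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in walls and nxt not in seen):
                seen.add(nxt)
                heapq.heappush(frontier,
                               (manhattan(nxt, goal), nxt, path + [nxt]))
    return None  # goal unreachable

path = greedy_search((0, 0), (4, 4), walls={(2, 2), (2, 3)})
print(path[0], path[-1])  # (0, 0) (4, 4)
```

Here the heuristic trades optimality for speed, exactly as the entry says: the returned path reaches the goal but is not guaranteed to be the shortest one.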

I

J

K

L

M

N

  • Naive Bayes classifier – In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes’ theorem with strong (naive) independence assumptions between the features.
  • Naive semantics – is an approach used in computer science for representing basic knowledge about a specific domain, and has been used in applications such as the representation of the meaning of natural language sentences in artificial intelligence applications. In a general setting the term has been used to refer to the use of a limited store of generally understood knowledge about a specific domain in the world, and has been applied to fields such as the knowledge based design of data schemas.[221]
  • Name binding – In programming languages, name binding is the association of entities (data and/or code) with identifiers.[222] An identifier bound to an object is said to reference that object. Machine languages have no built-in notion of identifiers, but name-object bindings as a service and notation for the programmer is implemented by programming languages. Binding is intimately connected with scoping, as scope determines which names bind to which objects – at which locations in the program code (lexically) and in which one of the possible execution paths (temporally). Use of an identifier id in a context that establishes a binding for id is called a binding (or defining) occurrence. In all other occurrences (e.g., in expressions, assignments, and subprogram calls), an identifier stands for what it is bound to; such occurrences are called applied occurrences.
  • Named-entity recognition – (NER), (also known as entity identification, entity chunking and entity extraction) is a subtask of information extraction that seeks to locate and classify named entity mentions in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc.
  • Named graph – Named graphs are a key concept of Semantic Web architecture in which a set of Resource Description Framework statements (a graph) are identified using a URI,[223] allowing descriptions to be made of that set of statements such as context, provenance information or other such metadata. Named graphs are a simple extension of the RDF data model[224] through which graphs can be created but the model lacks an effective means of distinguishing between them once published on the Web at large.
  • Natural language generation – (NLG), is a software process that transforms structured data into plain-English content. It can be used to produce long-form content for organizations to automate custom reports, as well as produce custom content for a web or mobile application. It can also be used to generate short blurbs of text in interactive conversations (a chatbot) which might even be read out loud by a text-to-speech system.
  • Natural language processing – (NLP), is a subfield of computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.
  • Natural language programming – is an ontology-assisted way of programming in terms of natural-language sentences, e.g. English.[225]
  • Network motif – All networks, including biological networks, social networks, technological networks (e.g., computer networks and electrical circuits) and more, can be represented as graphs, which include a wide variety of subgraphs. One important local property of networks are so-called network motifs, which are defined as recurrent and statistically significant sub-graphs or patterns.
  • Neural machine translation – (NMT), is an approach to machine translation that uses a large artificial neural network to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model.
  • Neural Turing machine – (NTMs) is a recurrent neural network model. NTMs combine the fuzzy pattern matching capabilities of neural networks with the algorithmic power of programmable computers. An NTM has a neural network controller coupled to external memory resources, which it interacts with through attentional mechanisms. The memory interactions are differentiable end-to-end, making it possible to optimize them using gradient descent.[226] An NTM with a long short-term memory (LSTM) network controller can infer simple algorithms such as copying, sorting, and associative recall from examples alone.[227]
  • Neuro-fuzzy – refers to combinations of artificial neural networks and fuzzy logic.
  • Neurocybernetics – A brain–computer interface (BCI), sometimes called a neural-control interface (NCI), mind-machine interface (MMI), direct neural interface (DNI), or brain–machine interface (BMI), is a direct communication pathway between an enhanced or wired brain and an external device. BCI differs from neuromodulation in that it allows for bidirectional information flow. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.[228]
  • Neuromorphic engineering – also known as neuromorphic computing,[229][230][231] is a concept describing the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system.[232] In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems (for perception, motor control, or multisensory integration). The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors,[233] spintronic memories,[234] threshold switches, and transistors.[235]
  • Node – is a basic unit of a data structure, such as a linked list or tree data structure. Nodes contain data and also may link to other nodes. Links between nodes are often implemented by pointers.
  • Nondeterministic algorithm – is an algorithm that, even for the same input, can exhibit different behaviors on different runs, as opposed to a deterministic algorithm.
  • Nouvelle AI – Nouvelle AI differs from classical AI by aiming to produce robots with intelligence levels similar to insects. Researchers believe that intelligence can emerge organically from simple behaviors as these intelligences interact with the “real world”, instead of using the constructed worlds which symbolic AIs typically needed to have programmed into them.[236]
  • NP – In computational complexity theory, NP (nondeterministic polynomial time) is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is “yes”, have proofs verifiable in polynomial time.[237][Note 1]
  • NP-completeness – In computational complexity theory, a problem is NP-complete when it can be solved by a restricted class of brute force search algorithms and it can be used to simulate any other problem with a similar algorithm. More precisely, each input to the problem should be associated with a set of solutions of polynomial length, whose validity can be tested quickly (in polynomial time[238]), such that the output for any input is “yes” if the solution set is non-empty and “no” if it is empty.
  • NP-hardness – (non-deterministic polynomial-time hardness), in computational complexity theory, is the defining property of a class of problems that are, informally, “at least as hard as the hardest problems in NP”. A simple example of an NP-hard problem is the subset sum problem.
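The naive Bayes entry above says the classifier applies Bayes' theorem with a strong independence assumption between features. A minimal sketch in pure Python makes both halves visible; the toy spam/ham corpus and all names are invented for illustration:

```python
# Toy naive Bayes text classifier: score(class | words) is proportional to
# P(class) * product of P(word | class); the "naive" part is assuming the
# words are independent of each other given the class.
from collections import Counter
import math

docs = [("buy cheap pills now", "spam"),
        ("cheap offer buy now", "spam"),
        ("meeting agenda for monday", "ham"),
        ("lunch meeting on monday", "ham")]

class_counts = Counter(label for _, label in docs)
word_counts = {label: Counter() for label in class_counts}
for text, label in docs:
    word_counts[label].update(text.split())
vocab = {w for text, _ in docs for w in text.split()}

def score(text, label):
    # Work in log space to avoid underflow; the +1 is Laplace smoothing,
    # so unseen words do not zero out the whole product.
    logp = math.log(class_counts[label] / len(docs))
    total = sum(word_counts[label].values())
    for w in text.split():
        logp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
    return logp

def classify(text):
    return max(class_counts, key=lambda label: score(text, label))

print(classify("cheap pills"))     # spam
print(classify("monday meeting"))  # ham
```

Despite the independence assumption being false for real text (word order and co-occurrence clearly matter), this family of classifiers is a common baseline precisely because it is this simple to train and apply.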

O

P

  • Partial order reduction – is a technique for reducing the size of the state-space to be searched by a model checking or automated planning and scheduling algorithm. It exploits the commutativity of concurrently executed transitions, which result in the same state when executed in different orders.
  • Partially observable Markov decision process – (POMDP), is a generalization of a Markov decision process (MDP). A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state. Instead, it must maintain a probability distribution over the set of possible states, based on a set of observations and observation probabilities, and the underlying MDP.
  • Particle swarm optimization – (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle’s position and velocity. Each particle’s movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
  • Pathfinding – or pathing, is the plotting, by a computer application, of the shortest route between two points. It is a more practical variant on solving mazes. This field of research is based heavily on Dijkstra’s algorithm for finding a shortest path on a weighted graph.
  • Pattern recognition – is concerned with the automatic discovery of regularities in data through the use of computer algorithms and with the use of these regularities to take actions such as classifying the data into different categories.[244]
  • Predicate logic – First-order logic—also known as predicate logic and first-order predicate calculus—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects and allows the use of sentences that contain variables, so that rather than propositions such as Socrates is a man one can have expressions in the form “there exists x such that x is Socrates and x is a man”, where there exists is a quantifier and x is a variable.[174] This distinguishes it from propositional logic, which does not use quantifiers or relations;[245] in this sense, propositional logic is the foundation of first-order logic.
  • Predictive analytics – encompasses a variety of statistical techniques from data mining, predictive modelling, and machine learning, that analyze current and historical facts to make predictions about future or otherwise unknown events.[246][247]
  • Principal component analysis – (PCA), is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables (entities each of which takes on various numerical values) into a set of values of linearly uncorrelated variables called principal components. This transformation is defined in such a way that the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component, in turn, has the highest variance possible under the constraint that it is orthogonal to the preceding components. The resulting vectors (each being a linear combination of the variables and containing n observations) are an uncorrelated orthogonal basis set. PCA is sensitive to the relative scaling of the original variables.
  • Principle of rationality – (or rationality principle), was coined by Karl R. Popper in his Harvard Lecture of 1963, and published in his book The Myth of the Framework.[248] It is related to what he called the ‘logic of the situation’ in an Economica article of 1944/1945, published later in his book The Poverty of Historicism.[249] According to Popper’s rationality principle, agents act in the most adequate way according to the objective situation. It is an idealized conception of human behavior which he used to drive his model of situational analysis.
  • Probabilistic programming – (PP), is a programming paradigm in which probabilistic models are specified and inference for these models is performed automatically.[250] It represents an attempt to unify probabilistic modeling and traditional general-purpose programming in order to make the former easier and more widely applicable.[251][252] It can be used to create systems that help make decisions in the face of uncertainty. Programming languages used for probabilistic programming are referred to as “Probabilistic programming languages” (PPLs).
  • Production system – is a computer program typically used to provide some form of artificial intelligence, consisting primarily of a set of rules about behavior, together with a mechanism for following those rules and acting on them.
  • Programming language – is a formal language, which comprises a set of instructions that produce various kinds of output. Programming languages are used in computer programming to implement algorithms.
  • Prolog – is a logic programming language associated with artificial intelligence and computational linguistics.[253][254][255] Prolog has its roots in first-order logic, a formal logic, and unlike many other programming languages, Prolog is intended primarily as a declarative programming language: the program logic is expressed in terms of relations, represented as facts and rules. A computation is initiated by running a query over these relations.[256]
  • Propositional calculus – is a branch of logic. It is also called propositional logic, statement logic, sentential calculus, sentential logic, or sometimes zeroth-order logic. It deals with propositions (which can be true or false) and argument flow. Compound propositions are formed by connecting propositions by logical connectives. The propositions without logical connectives are called atomic propositions. Unlike first-order logic, propositional logic does not deal with non-logical objects, predicates about them, or quantifiers. However, all the machinery of propositional logic is included in first-order logic and higher-order logics. In this sense, propositional logic is the foundation of first-order logic and higher-order logic.
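Because atomic propositions take only the values true and false, claims in propositional logic can be checked mechanically by enumerating a truth table (a small Python sketch; the helper names are ours):

```python
from itertools import product

# A formula true under every assignment of its atomic propositions is a
# tautology. We verify that (p AND q) -> p is a tautology, and that
# material implication p -> q is equivalent to (not p) OR q.

def implies(a, b):
    return (not a) or b

def is_tautology(formula, n_vars):
    """Enumerate all 2**n_vars truth assignments and test the formula."""
    return all(formula(*vals) for vals in product([False, True], repeat=n_vars))

taut = is_tautology(lambda p, q: implies(p and q, p), 2)
equiv = is_tautology(lambda p, q: implies(p, q) == ((not p) or q), 2)
```

This exhaustive check is exactly what first-order logic loses: once quantifiers range over infinite domains, truth can no longer be decided by finite enumeration.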
  • Python – is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum and first released in 1991, Python’s design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects.[257]
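A small example of the significant-whitespace style the entry mentions: block structure is expressed by indentation alone.

```python
# In Python, indentation delimits blocks: the loop body and the
# conditional body below are defined by their leading whitespace,
# with no braces or 'end' keywords.
def evens(numbers):
    result = []
    for n in numbers:
        if n % 2 == 0:
            result.append(n)
    return result

print(evens([1, 2, 3, 4, 5, 6]))  # prints [2, 4, 6]
```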

Q

R

S

T

U

V

W

X

Y

Z

See also

References

  1. ^ a b For example: Josephson, John R.; Josephson, Susan G., eds. (1994). Abductive Inference: Computation, Philosophy, Technology. Cambridge, UK; New York: Cambridge University Press. doi:10.1017/CBO9780511530128. ISBN 978-0521434614. OCLC 28149683.
  2. ^ “Retroduction | Dictionary | Commens”. Commens – Digital Companion to C. S. Peirce. Mats Bergman, Sami Paavola & João Queiroz. Retrieved 24 August 2014.
  3. ^ Colburn, Timothy; Shute, Gary (5 June 2007). “Abstraction in Computer Science”. Minds and Machines. 17 (2): 169–184. doi:10.1007/s11023-007-9061-7. ISSN 0924-6495.
  4. ^ Kramer, Jeff (1 April 2007). “Is abstraction the key to computing?”. Communications of the ACM. 50 (4): 36–42. CiteSeerX 10.1.1.120.6776. doi:10.1145/1232743.1232745. ISSN 0001-0782.
  5. ^ Michael Gelfond, Vladimir Lifschitz (1998) “Action Languages“, Linköping Electronic Articles in Computer and Information Science, vol 3, nr 16.
  6. ^ Jang, Jyh-Shing R (1991). Fuzzy Modeling Using Generalized Neural Networks and Kalman Filter Algorithm (PDF). Proceedings of the 9th National Conference on Artificial Intelligence, Anaheim, CA, USA, July 14–19. 2. pp. 762–767.
  7. ^ Jang, J.-S.R. (1993). “ANFIS: adaptive-network-based fuzzy inference system”. IEEE Transactions on Systems, Man and Cybernetics. 23 (3): 665–685. doi:10.1109/21.256541.
  8. ^ Abraham, A. (2005), “Adaptation of Fuzzy Inference System Using Neural Learning”, in Nedjah, Nadia; de Macedo Mourelle, Luiza (eds.), Fuzzy Systems Engineering: Theory and Practice, Studies in Fuzziness and Soft Computing, 181, Germany: Springer Verlag, pp. 53–83, CiteSeerX 10.1.1.161.6135, doi:10.1007/11339366_3, ISBN 978-3-540-25322-8
  9. ^ Jang, Sun, Mizutani (1997) – Neuro-Fuzzy and Soft Computing – Prentice Hall, pp 335–368, ISBN 0-13-261066-3
  10. ^ Tahmasebi, P. (2012). “A hybrid neural networks-fuzzy logic-genetic algorithm for grade estimation”. Computers & Geosciences. 42: 18–27. Bibcode:2012CG.....42...18T. doi:10.1016/j.cageo.2012.02.004. PMC 4268588. PMID 25540468.
  11. ^ Tahmasebi, P. (2010). “Comparison of optimized neural network with fuzzy logic for ore grade estimation”. Australian Journal of Basic and Applied Sciences. 4: 764–772.
  12. ^ Russell, S.J.; Norvig, P. (2002). Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-790395-5.
  13. ^ Rana el Kaliouby (November–December 2017). “We Need Computers with Empathy”. Technology Review. 120 (6). p. 8.
  14. ^ Tao, Jianhua; Tieniu Tan (2005). “Affective Computing: A Review”. Affective Computing and Intelligent Interaction. LNCS 3784. Springer. pp. 981–995. doi:10.1007/11573548.
  15. ^ Comparison of Agent Architectures Archived August 27, 2008, at the Wayback Machine
  16. ^ “Intel unveils Movidius Compute Stick USB AI Accelerator”. 21 July 2017.
  17. ^ “Inspurs unveils GX4 AI Accelerator”. 21 June 2017.
  18. ^ Shapiro, Stuart C. (1992). Artificial Intelligence In Stuart C. Shapiro (Ed.), Encyclopedia of Artificial Intelligence (Second Edition, pp. 54–57). New York: John Wiley. (Section 4 is on “AI-Complete Tasks”.)
  19. ^ Solomonoff, R., “A Preliminary Report on a General Theory of Inductive Inference“, Report V-131, Zator Co., Cambridge, Ma. (Nov. 1960 revision of the Feb. 4, 1960 report).
  20. ^ “Artificial intelligence: Google’s AlphaGo beats Go master Lee Se-dol”. BBC News. 12 March 2016. Retrieved 17 March 2016.
  21. ^ “AlphaGo | DeepMind”. DeepMind.
  22. ^ “Research Blog: AlphaGo: Mastering the ancient game of Go with Machine Learning”. Google Research Blog. 27 January 2016.
  23. ^ “Google achieves AI ‘breakthrough’ by beating Go champion”. BBC News. 27 January 2016.
  24. ^ See Dung (1995)
  25. ^ See Besnard and Hunter (2001)
  26. ^ see Bench-Capon (2002)
  27. ^ Definition of AI as the study of intelligent agents:
  28. ^ Russell & Norvig 2009, p. 2.
  29. ^ “AAAI Corporate Bylaws”.
  30. ^ “The Lengthy History of Augmented Reality”. Huffington Post. 15 May 2016.
  31. ^ Schueffel, Patrick (2017). The Concise Fintech Compendium. Fribourg: School of Management Fribourg/Switzerland.
  32. ^ Ghallab, Malik; Nau, Dana S.; Traverso, Paolo (2004), Automated Planning: Theory and Practice, Morgan Kaufmann, ISBN 978-1-55860-856-6
  33. ^ Kephart, J.O.; Chess, D.M. (2003), “The vision of autonomic computing”, Computer, 36: 41–52, CiteSeerX 10.1.1.70.613, doi:10.1109/MC.2003.1160055
  34. ^ “Self-driving Uber car kills Arizona woman crossing street”. Reuters. 20 March 2018 – via http://www.reuters.com.
  35. ^ Thrun, Sebastian (2010). “Toward Robotic Cars”. Communications of the ACM53 (4): 99–106. doi:10.1145/1721654.1721679.
  36. ^ Gehrig, Stefan K.; Stein, Fridtjof J. (1999). Dead reckoning and cartography using stereo vision for an automated car. IEEE/RSJ International Conference on Intelligent Robots and Systems. 3. Kyongju. pp. 1507–1512. doi:10.1109/IROS.1999.811692. ISBN 0-7803-5184-3.
  37. ^ “Information Engineering Main/Home Page”. http://www.robots.ox.ac.uk. Retrieved 3 October 2018.
  38. ^ Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016) Deep Learning. MIT Press. p. 196. ISBN 9780262035613
  39. ^ Nielsen, Michael A. (2015). “Chapter 6”Neural Networks and Deep Learning.
  40. ^ “Deep Networks: Overview – Ufldl”ufldl.stanford.edu. Retrieved 4 August 2017.
  41. ^ Mozer, M. C. (1995). “A Focused Backpropagation Algorithm for Temporal Pattern Recognition”. In Chauvin, Y.; Rumelhart, D. (eds.). Backpropagation: Theory, architectures, and applicationsResearchGate. Hillsdale, NJ: Lawrence Erlbaum Associates. pp. 137–169. Retrieved 21 August 2017.
  42. ^ Robinson, A. J. & Fallside, F. (1987). The utility driven dynamic error propagation network (Technical report). Cambridge University, Engineering Department. CUED/F-INFENG/TR.1.
  43. ^ Werbos, Paul J. (1988). “Generalization of backpropagation with application to a recurrent gas market model”Neural Networks1 (4): 339–356. doi:10.1016/0893-6080(88)90007-x.
  44. ^ Feigenbaum, Edward (1988). The Rise of the Expert Company. Times Books. p. 317. ISBN 978-0-8129-1731-4.
  45. ^ Sivic, Josef (April 2009). “Efficient visual search of videos cast as text retrieval” (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 31 (4): 591–605. doi:10.1109/TPAMI.2008.111. PMID 19229077.
  46. ^ McTear et al 2016, p. 167.
  47. ^ “Understanding the backward pass through Batch Normalization Layer”. kratzert.github.io. Retrieved 24 April 2018.
  48. ^ Ioffe, Sergey; Szegedy, Christian (2015). “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift”. arXiv:1502.03167. Bibcode:2015arXiv150203167I.
  49. ^ “Glossary of Deep Learning: Batch Normalisation”. medium.com. 27 June 2017. Retrieved 24 April 2018.
  50. ^ “Batch normalization in Neural Networks”. towardsdatascience.com. 20 October 2017. Retrieved 24 April 2018.
  51. ^ Pham DT, Ghanbarzadeh A, Koc E, Otri S, Rahim S and Zaidi M. The Bees Algorithm. Technical Note, Manufacturing Engineering Centre, Cardiff University, UK, 2005.
  52. ^ Pham, D.T., Castellani, M. (2009), The Bees Algorithm – Modelling Foraging Behaviour to Solve Continuous Optimisation Problems. Proc. ImechE, Part C, 223(12), 2919-2938.
  53. ^ Pham, D. T.; Castellani, M. (2014). “Benchmarking and comparison of nature-inspired population-based continuous optimisation algorithms”. Soft Computing18 (5): 871–903. doi:10.1007/s00500-013-1104-9.
  54. ^ Pham, Duc Truong; Castellani, Marco (2015). “A comparative study of the Bees Algorithm as a tool for function optimisation”. Cogent Engineering2doi:10.1080/23311916.2015.1091540.
  55. ^ Nasrinpour, H. R., Massah Bavani, A., Teshnehlab, M., (2017), Grouped Bees Algorithm: A Grouped Version of the Bees Algorithm, Computers 2017, 6(1), 5; (doi: 10.3390/computers6010005)
  56. ^ Cao, Longbing (2010). “In-depth Behavior Understanding and Use: the Behavior Informatics Approach”. Information Science180 (17): 3067–3085. doi:10.1016/j.ins.2010.03.025.
  57. ^ Colledanchise Michele, and Ögren Petter 2016. How Behavior Trees Modularize Hybrid Control Systems and Generalize Sequential Behavior Compositions, the Subsumption Architecture, and Decision Trees. In IEEE Transactions on Robotics vol.PP, no.99, pp.1-18 (2016)
  58. ^ Colledanchise Michele, and Ögren Petter 2017. Behavior Trees in Robotics and AI: An Introduction.
  59. ^ Breur, Tom (July 2016). “Statistical Power Analysis and the contemporary “crisis” in social sciences”. Journal of Marketing Analytics. 4 (2–3): 61–65. doi:10.1057/s41270-016-0001-3. ISSN 2050-3318.
  60. ^ Bachmann, Paul (1894). Analytische Zahlentheorie [Analytic Number Theory] (in German). 2. Leipzig: Teubner.
  61. ^ Landau, Edmund (1909). Handbuch der Lehre von der Verteilung der Primzahlen [Handbook on the theory of the distribution of the primes] (in German). Leipzig: B. G. Teubner. p. 883.
  62. ^ Rowan Garnier; John Taylor (2009). Discrete Mathematics: Proofs, Structures and Applications, Third Edition. CRC Press. p. 620. ISBN 978-1-4398-1280-8.
  63. ^ Steven S Skiena (2009). The Algorithm Design Manual. Springer Science & Business Media. p. 77. ISBN 978-1-84800-070-4.
  64. ^ Erman, L. D.; Hayes-Roth, F.; Lesser, V. R.; Reddy, D. R. (1980). “The Hearsay-II Speech-Understanding System: Integrating Knowledge to Resolve Uncertainty”. ACM Computing Surveys12 (2): 213. doi:10.1145/356810.356816.
  65. ^ Corkill, Daniel D. (September 1991). “Blackboard Systems” (PDF). AI Expert. 6 (9): 40–47.
  66. ^ Nii, H. Yenny (1986). Blackboard Systems (PDF) (Technical report). Department of Computer Science, Stanford University. STAN-CS-86-1123. Retrieved 12 April 2013.
  67. ^ Hayes-Roth, B. (1985). “A blackboard architecture for control”. Artificial Intelligence. 26 (3): 251–321. doi:10.1016/0004-3702(85)90063-3.
  68. ^ Hinton, Geoffrey E. (24 May 2007). “Boltzmann machine”. Scholarpedia. 2 (5): 1668. Bibcode:2007SchpJ...2.1668H. doi:10.4249/scholarpedia.1668. ISSN 1941-6016.
  69. ^ NZZ- Die Zangengeburt eines möglichen Stammvaters. Website Neue Zürcher Zeitung. Seen 16. August 2013.
  70. ^ Official Homepage Roboy Archived 2013-08-03 at the Wayback Machine. Website Roboy. Seen 16. August 2013.
  71. ^ Official Homepage Starmind. Website Starmind. Seen 16. August 2013.
  72. ^ Sabour, Sara; Frosst, Nicholas; Hinton, Geoffrey E. (26 October 2017). “Dynamic Routing Between Capsules”. arXiv:1710.09829 [cs.CV].
  73. ^ “What is a chatbot?”. techtarget.com. Retrieved 30 January 2017.
  74. ^ Civera, Javier; Ciocarlie, Matei; Aydemir, Alper; Bekris, Kostas; Sarma, Sanjay (2015). “Guest Editorial Special Issue on Cloud Robotics and Automation”. IEEE Transactions on Automation Science and Engineering. 12 (2): 396–397. doi:10.1109/TASE.2015.2409511.
  75. ^ “Robo Earth – Tech News”. Robo Earth.
  76. ^ Goldberg, Ken. “Cloud Robotics and Automation”.
  77. ^ Li, R. “Cloud Robotics-Enable cloud computing for robots”. Retrieved 7 December 2014.
  78. ^ Fisher, Douglas (1987). “Knowledge acquisition via incremental conceptual clustering” (PDF). Machine Learning. 2 (2): 139–172. doi:10.1007/BF00114265.
  79. ^ Fisher, Douglas H. (July 1987). “Improving inference through conceptual clustering”. Proceedings of the 1987 AAAI Conferences. AAAI Conference. Seattle Washington. pp. 461–465.
  80. ^ William Iba and Pat Langley (27 January 2011). “Cobweb models of categorization and probabilistic concept formation”. In Emmanuel M. Pothos and Andy J. Wills (ed.). Formal approaches in categorization. Cambridge: Cambridge University Press. pp. 253–273. ISBN 9780521190480.
  81. ^ Refer to the ICT website: http://cogarch.ict.usc.edu/
  82. ^ “Hewlett Packard Labs”.
  83. ^ Terdiman, Daniel (2014). IBM’s TrueNorth processor mimics the human brain. http://www.cnet.com/news/ibms-truenorth-processor-mimics-the-human-brain/
  84. ^ Knight, Shawn (2011). IBM unveils cognitive computing chips that mimic human brain. TechSpot: August 18, 2011, 12:00 PM
  85. ^ Hamill, Jasper (2013). Cognitive computing: IBM unveils software for its brain-like SyNAPSE chips The Register: August 8, 2013
  86. ^ Denning, P.J. (2014). “Surfing Toward the Future”. Communications of the ACM. 57 (3): 26–29. doi:10.1145/2566967.
  87. ^ Dr. Lars Ludwig (2013). “Extended Artificial Memory. Toward an integral cognitive theory of memory and technology” (pdf). Technical University of Kaiserslautern. Retrieved 7 February 2017.
  88. ^ “Research at HP Labs”.
  89. ^ “Automate Complex Workflows Using Tactical Cognitive Computing: Coseer”. thesiliconreview.com. Retrieved 31 July 2017.
  90. ^ Cognitive science is an interdisciplinary field of researchers from Linguistics, psychology, neuroscience, philosophy, computer science, and anthropology that seek to understand the mind. How We Learn: Ask the Cognitive Scientist
  91. ^ Schrijver, Alexander (February 1, 2006). A Course in Combinatorial Optimization (PDF), page 1.
  92. ^ HAYKIN, S. Neural Networks – A Comprehensive Foundation. Second edition. Pearson Prentice Hall: 1999.
  93. ^ “PROGRAMS WITH COMMON SENSE”. www-formal.stanford.edu. Retrieved 11 April 2018.
  94. ^ Ernest Davis; Gary Marcus (2015). “Commonsense reasoning”. Communications of the ACM. Vol. 58, no. 9. pp. 92–103. doi:10.1145/2701413.
  95. ^ Hulstijn, J, and Nijholt, A. (eds.). Proceedings of the International Workshop on Computational Humor. Number 12 in Twente Workshops on Language Technology, Enschede, Netherlands. University of Twente, 1996.
  96. ^ “ACL – Association for Computational Learning”.
  97. ^ Trappenberg, Thomas P. (2002). Fundamentals of Computational Neuroscience. United States: Oxford University Press Inc. p. 1. ISBN 978-0-19-851582-1.
  98. ^ What is computational neuroscience? Patricia S. Churchland, Christof Koch, Terrence J. Sejnowski. in Computational Neuroscience pp.46-55. Edited by Eric L. Schwartz. 1993. MIT Press “Archived copy”. Archived from the original on 4 June 2011. Retrieved 11 June 2009.
  99. ^ Press, The MIT. “Theoretical Neuroscience”. The MIT Press. Retrieved 24 May 2018.
  100. ^ Gerstner, W.; Kistler, W.; Naud, R.; Paninski, L. (2014). Neuronal Dynamics. Cambridge, UK: Cambridge University Press. ISBN 9781107447615.
  101. ^ Kamentsky, L.A., and Liu, C.-N. (1963). Computer-Automated Design of Multifont Print Recognition Logic, IBM Journal of Research and Development, 7(1), p.2
  102. ^ Brncick, M. (2000). Computer automated design and computer automated manufacture, Phys Med Rehabil Clin N Am, Aug, 11(3), 701-13.
  103. ^ Li, Y., et al. (2004). CAutoCSD – Evolutionary search and optimisation enabled computer automated control system design Archived 2015-08-31 at the Wayback Machine. International Journal of Automation and Computing, 1(1). 76-88. ISSN 1751-8520
  104. ^ KRAMER, GJE; GRIERSON, DE, (1989) COMPUTER AUTOMATED DESIGN OF STRUCTURES UNDER DYNAMIC LOADS, COMPUTERS & STRUCTURES, 32(2), 313-325
  105. ^ MOHARRAMI, H; GRIERSON, DE, 1993, COMPUTER-AUTOMATED DESIGN OF REINFORCED-CONCRETE FRAMEWORKS, JOURNAL OF STRUCTURAL ENGINEERING-ASCE, 119(7), 2036-2058
  106. ^ XU, L; GRIERSON, DE, (1993) COMPUTER-AUTOMATED DESIGN OF SEMIRIGID STEEL FRAMEWORKS, JOURNAL OF STRUCTURAL ENGINEERING-ASCE, 119(6), 1740-1760
  107. ^ Barsan, GM; Dinsoreanu, M, (1997). Computer-automated design based on structural performance criteria, Mouchel Centenary Conference on Innovation in Civil and Structural Engineering, AUG 19-21, CAMBRIDGE ENGLAND, INNOVATION IN CIVIL AND STRUCTURAL ENGINEERING, 167-172
  108. ^ Li, Y., et al. (1996). Genetic algorithm automated approach to the design of sliding mode control systems, Int J Control, 63(4), 721-739.
  109. ^ Li, Y., et al. (1995). Automation of Linear and Nonlinear Control Systems Design by Evolutionary Computation, Proc. IFAC Youth Automation Conf., Beijing, China, August 1995, 53-58.
  110. ^ Barsan, GM, (1995) Computer-automated design of semirigid steel frameworks according to EUROCODE-3, Nordic Steel Construction Conference 95, JUN 19-21, 787-794
  111. ^ Gary J. Gray, David J. Murray-Smith, Yun Li, et al. (1998). Nonlinear model structure identification using genetic programming, Control Engineering Practice 6 (1998) 1341—1352
  112. ^ Zhan, Z.H., et al. (2011). Evolutionary computation meets machine learning: a survey, IEEE Computational Intelligence Magazine, 6(4), 68-75.
  113. ^ Gregory S. Hornby (2003). Generative Representations for Computer-Automated Design Systems, NASA Ames Research Center, Mail Stop 269-3, Moffett Field, CA 94035-1000
  114. ^ J. Clune and H. Lipson (2011). Evolving three-dimensional objects with a generative encoding inspired by developmental biology. Proceedings of the European Conference on Artificial Life. 2011.
  115. ^ Zhan, Z.H., et al. (2009). Adaptive Particle Swarm Optimization, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), Vol.39, No.6. 1362-1381
  116. ^ “WordNet Search—3.1”. Wordnetweb.princeton.edu. Retrieved 14 May 2012.
  117. ^ Dana H. Ballard; Christopher M. Brown (1982). Computer Vision. Prentice Hall. ISBN 0-13-165316-4.
  118. ^ Huang, T. (1996-11-19). Vandoni, Carlo, E, ed. Computer Vision : Evolution And Promise (PDF). 19th CERN School of Computing. Geneva: CERN. pp. 21–25. doi:10.5170/CERN-1996-008.21. ISBN 978-9290830955.
  119. ^ Milan Sonka; Vaclav Hlavac; Roger Boyle (2008). Image Processing, Analysis, and Machine Vision. Thomson. ISBN 0-495-08252-X.
  120. ^ Garson, James (27 November 2018). Zalta, Edward N. (ed.). The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University – via Stanford Encyclopedia of Philosophy.
  121. ^ “Ishtar for Belgium to Belgrade”. European Broadcasting Union. Retrieved 19 May 2013.
  122. ^ LeCun, Yann. “LeNet-5, convolutional neural networks”. Retrieved 16 November 2013.
  123. ^ Zhang, Wei (1988). “Shift-invariant pattern recognition neural network and its optical architecture”. Proceedings of annual conference of the Japan Society of Applied Physics.
  124. ^ Zhang, Wei (1990). “Parallel distributed processing model with local space-invariant interconnections and its optical architecture”. Applied Optics. 29 (32): 4790–7. Bibcode:1990ApOpt..29.4790Z. doi:10.1364/AO.29.004790. PMID 20577468.,
  125. ^ Tian, Yuandong; Zhu, Yan (2015). “Better Computer Go Player with Neural Network and Long-term Prediction”. arXiv:1511.06410v1 [cs.LG].
  126. ^ “How Facebook’s AI Researchers Built a Game-Changing Go Engine”. MIT Technology Review. 4 December 2015. Retrieved 3 February 2016.
  127. ^ “Facebook AI Go Player Gets Smarter With Neural Network And Long-Term Prediction To Master World’s Hardest Game”. Tech Times. 28 January 2016. Retrieved 24 April 2016.
  128. ^ “Facebook’s artificially intelligent Go player is getting smarter”. VentureBeat. 27 January 2016. Retrieved 24 April 2016.
  129. ^ Solomonoff, R.J.The Time Scale of Artificial Intelligence; Reflections on Social Effects, Human Systems Management, Vol 5 1985, Pp 149-153
  130. ^ Moor, J., The Dartmouth College Artificial Intelligence Conference: The Next Fifty years, AI Magazine, Vol 27, No., 4, Pp. 87-9, 2006
  131. ^ Kline, Ronald R., Cybernetics, Automata Studies and the Dartmouth Conference on Artificial Intelligence, IEEE Annals of the History of Computing, October–December, 2011, IEEE Computer Society
  132. ^ a b Haghighat, Mohammad; Abdel-Mottaleb, Mohamed; Alhalabi, Wadee (2016). “Discriminant Correlation Analysis: Real-Time Feature Level Fusion for Multimodal Biometric Recognition”. IEEE Transactions on Information Forensics and Security. 11 (9): 1984–1996. doi:10.1109/TIFS.2016.2569061.
  133. ^ Maurizio Lenzerini (2002). “Data Integration: A Theoretical Perspective” (PDF). PODS 2002. pp. 233–246.
  134. ^ Frederick Lane (2006). “IDC: World Created 161 Billion Gigs of Data in 2006”.
  135. ^ Dhar, V. (2013). “Data science and prediction”. Communications of the ACM. 56 (12): 64–73. doi:10.1145/2500499.
  136. ^ Jeff Leek (12 December 2013). “The key word in “Data Science” is not Data, it is Science”. Simply Statistics.
  137. ^ Hayashi, Chikio (1 January 1998). “What is Data Science? Fundamental Concepts and a Heuristic Example”. In Hayashi, Chikio; Yajima, Keiji; Bock, Hans-Hermann; Ohsumi, Noboru; Tanaka, Yutaka; Baba, Yasumasa (eds.). Data Science, Classification, and Related Methods. Studies in Classification, Data Analysis, and Knowledge Organization. Springer Japan. pp. 40–51. doi:10.1007/978-4-431-65950-1_3. ISBN 9784431702085.
  138. ^ Dedić, Nedim; Stanier, Clare (2016). Hammoudi, Slimane; Maciaszek, Leszek; Missikoff, Michele M.; Camp, Olivier; Cordeiro, José (eds.). An Evaluation of the Challenges of Multilingualism in Data Warehouse Development. International Conference on Enterprise Information Systems, 25–28 April 2016, Rome, Italy (PDF). Proceedings of the 18th International Conference on Enterprise Information Systems (ICEIS 2016). 1. SciTePress. pp. 196–206. doi:10.5220/0005858401960206. ISBN 978-989-758-187-8.
  139. ^ “9 Reasons Data Warehouse Projects Fail”. blog.rjmetrics.com. 4 December 2014. Retrieved 30 April 2017.
  140. ^ Huang, Green, and Loo, “Datalog and Emerging applications”, SIGMOD 2011 (PDF), UC Davis.
  141. ^ Steele, Katie and Stefánsson, H. Orri, “Decision Theory”, The Stanford Encyclopedia of Philosophy (Winter 2015 Edition), Edward N. Zalta (ed.), URL = [1]
  142. ^ Lloyd, J.W., Practical Advantages of Declarative Programming
  143. ^ Bengio, Y.; Courville, A.; Vincent, P. (2013). “Representation Learning: A Review and New Perspectives”. IEEE Transactions on Pattern Analysis and Machine Intelligence. 35 (8): 1798–1828. arXiv:1206.5538. doi:10.1109/tpami.2013.50.
  144. ^ Schmidhuber, J. (2015). “Deep Learning in Neural Networks: An Overview”. Neural Networks. 61: 85–117. arXiv:1404.7828. doi:10.1016/j.neunet.2014.09.003. PMID 25462637.
  145. ^ Bengio, Yoshua; LeCun, Yann; Hinton, Geoffrey (2015). “Deep Learning”. Nature. 521 (7553): 436–444. Bibcode:2015Natur.521..436L. doi:10.1038/nature14539. PMID 26017442.
  146. ^ “About Us | DeepMind”. DeepMind.
  147. ^ “A return to Paris | DeepMind”. DeepMind.
  148. ^ “The Last AI Breakthrough DeepMind Made Before Google Bought It”. The Physics arXiv Blog. 29 January 2014. Retrieved 12 October 2014.
  149. ^ Graves, Alex; Wayne, Greg; Danihelka, Ivo (2014). “Neural Turing Machines”. arXiv:1410.5401 [cs.NE].
  150. ^ Best of 2014: Google’s Secretive DeepMind Startup Unveils a “Neural Turing Machine”. MIT Technology Review.
  151. ^ Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago (12 October 2016). “Hybrid computing using a neural network with dynamic external memory”. Nature. 538 (7626): 471–476. Bibcode:2016Natur.538..471G. doi:10.1038/nature20101. ISSN 1476-4687. PMID 27732574.
  152. ^ Kohs, Greg (29 September 2017), AlphaGo, Ioannis Antonoglou, Lucas Baker, Nick Bostrom, retrieved 9 January 2018
  153. ^ Silver, David; Hubert, Thomas; Schrittwieser, Julian; Antonoglou, Ioannis; Lai, Matthew; Guez, Arthur; Lanctot, Marc; Sifre, Laurent; Kumaran, Dharshan; Graepel, Thore; Lillicrap, Timothy; Simonyan, Karen; Hassabis, Demis (5 December 2017). “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm”. arXiv:1712.01815 [cs.AI].
  154. ^ Sikos, Leslie F. (2017). Description Logics in Multimedia Reasoning. Cham: Springer International Publishing. doi:10.1007/978-3-319-54066-5. ISBN 978-3-319-54066-5.
  155. ^ Roweis, S. T.; Saul, L. K. (2000). “Nonlinear Dimensionality Reduction by Locally Linear Embedding”. Science. 290 (5500): 2323–2326. Bibcode:2000Sci...290.2323R. CiteSeerX 10.1.1.111.3313. doi:10.1126/science.290.5500.2323. PMID 11125150.
  156. ^ Pudil, P.; Novovičová, J. (1998). “Novel Methods for Feature Subset Selection with Respect to Problem Knowledge”. In Liu, Huan; Motoda, Hiroshi (eds.). Feature Extraction, Construction and Selection. p. 101. doi:10.1007/978-1-4615-5725-8_7. ISBN 978-1-4613-7622-4.
  157. ^ Demazeau, Yves, and J-P. Müller, eds. Decentralized Ai. Vol. 2. Elsevier, 1990.
  158. ^ Hendrickx, Iris; Van den Bosch, Antal (October 2005). “Hybrid algorithms with Instance-Based Classification”. Machine Learning: ECML 2005. Springer. pp. 158–169.
  159. ^ a b Adam Ostrow (5 March 2011). “Roger Ebert’s Inspiring Digital Transformation”. Mashable Entertainment. Retrieved 12 September 2011. “With the help of his wife, two colleagues and the Alex-equipped MacBook that he uses to generate his computerized voice, famed film critic Roger Ebert delivered the final talk at the TED conference on Friday in Long Beach, California….”
  160. ^ Jennifer 8. Lee (7 March 2011). “Roger Ebert Tests His Vocal Cords, and Comedic Delivery”. The New York Times. Retrieved 12 September 2011. “Now perhaps, there is the Ebert Test, a way to see if a synthesized voice can deliver humor with the timing to make an audience laugh…. He proposed the Ebert Test as a way to gauge the humanness of a synthesized voice.”
  161. ^ “Roger Ebert’s Inspiring Digital Transformation”. Tech News. 5 March 2011. Retrieved 12 September 2011. “Meanwhile, the technology that enables Ebert to “speak” continues to see improvements – for example, adding more realistic inflection for question marks and exclamation points. In a test of that, which Ebert called the “Ebert test” for computerized voices,”
  162. ^ Alex_Pasternack (18 April 2011). “A MacBook May Have Given Roger Ebert His Voice, But An iPod Saved His Life (Video)”. Motherboard. Retrieved 12 September 2011. “He calls it the “Ebert Test,” after Turing’s AI standard…”
  163. ^ Herbert Jaeger and Harald Haas. Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication. Science 2 April 2004: Vol. 304. no. 5667, pp. 78 – 80 doi:10.1126/science.1091277 PDF
  164. ^ Herbert Jaeger (2007) Echo State Network. Scholarpedia.
  165. ^ Serenko, Alexander; Bontis, Nick; Detlor, Brian (2007). “End-user adoption of animated interface agents in everyday work applications” (PDF). Behaviour and Information Technology. 26 (2): 119–132. doi:10.1080/01449290500260538.
  166. ^ Vikhar, P. A. “Evolutionary algorithms: A critical review and its future prospects”. Proceedings of the 2016 International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC). Jalgaon, 2016, pp. 261-265. ISBN 978-1-5090-0467-6.
  167. ^ Russell, Stuart; Norvig, Peter (2009). “26.3: The Ethics and Risks of Developing Artificial Intelligence”. Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4.
  168. ^ Bostrom, Nick (2002). “Existential risks”. Journal of Evolution and Technology. 9 (1): 1–31.
  169. ^ “Your Artificial Intelligence Cheat Sheet”. Slate. 1 April 2016. Retrieved 16 May 2016.
  170. ^ Jackson, Peter (1998), Introduction To Expert Systems (3 ed.), Addison Wesley, p. 2, ISBN 978-0-201-87686-4
  171. ^ “Conventional programming”. Pcmag.com. Retrieved 15 September 2013.
  172. ^ Martignon, Laura; Vitouch, Oliver; Takezawa, Masanori; Forster, Malcolm. “Naive and Yet Enlightened: From Natural Frequencies to Fast and Frugal Decision Trees”, published in Thinking: Psychological perspectives on reasoning, judgement and decision making (David Hardman and Laura Macchi; editors), Chichester: John Wiley & Sons, 2003.
  173. ^ Y. Bengio; A. Courville; P. Vincent (2013). “Representation Learning: A Review and New Perspectives”. IEEE Transactions on Pattern Analysis and Machine Intelligence. 35 (8): 1798–1828. arXiv:1206.5538. doi:10.1109/tpami.2013.50. PMID 23787338.
  174. ^ a b Hodgson, Dr. J. P. E., “First Order Logic”. Saint Joseph’s University, Philadelphia, 1995.
  175. ^ Hughes, G. E., & Cresswell, M. J., A New Introduction to Modal Logic (London: Routledge, 1996), p. 161.
  176. ^ Feigenbaum, Edward (1988). The Rise of the Expert Company. Times Books. p. 318. ISBN 978-0-8129-1731-4.
  177. ^ Hayes, Patrick. “The Frame Problem and Related Problems in Artificial Intelligence” (PDF). University of Edinburgh.
  178. ^ Sardar, Z. (2010) The Namesake: Futures; futures studies; futurology; futuristic; Foresight — What’s in a name? Futures, 42 (3), pp. 177–184.
  179. ^ Pedrycz, Witold (1993). Fuzzy control and fuzzy systems (2 ed.). Research Studies Press Ltd.
  180. ^ Hájek, Petr (1998). Metamathematics of fuzzy logic (4 ed.). Springer Science & Business Media.
  181. ^ D. Dubois and H. Prade (1988) Fuzzy Sets and Systems. Academic Press, New York.
  182. ^ Liang, Lily R.; Lu, Shiyong; Wang, Xuena; Lu, Yi; Mandal, Vinay; Patacsil, Dorrelyn; Kumar, Deepak (2006). “FM-test: A fuzzy-set-theory-based approach to differential gene expression data analysis”. BMC Bioinformatics. 7: S7. doi:10.1186/1471-2105-7-S4-S7. PMC 1780132. PMID 17217525.
  183. ^ Myerson, Roger B. (1991). Game Theory: Analysis of Conflict, Harvard University Press, p. 1. Chapter-preview links, pp. vii–xi.
  184. ^ Mitchell 1996, p. 2.
  185. ^ Trudeau, Richard J. (1993). Introduction to Graph Theory (Corrected, enlarged republication ed.). New York: Dover Pub. p. 19. ISBN 978-0-486-67870-2. Retrieved 8 August 2012. “A graph is an object consisting of two sets called its vertex set and its edge set.”
  186. ^ Nikolaos G. Bourbakis (1998). Artificial Intelligence and Automation. World Scientific. p. 381. ISBN 9789810226374. Retrieved 20 April 2018.
  187. ^ Yoon, Byoung-Ha; Kim, Seon-Kyu; Kim, Seon-Young (March 2017). “Use of Graph Database for the Integration of Heterogeneous Biological Data”. Genomics & Informatics. 15 (1): 19–27. doi:10.5808/GI.2017.15.1.19. ISSN 1598-866X. PMC 5389944. PMID 28416946.
  188. ^ Pearl, Judea (1984). Heuristics: intelligent search strategies for computer problem solving. United States: Addison-Wesley Pub. Co., Inc., Reading, MA. p. 3. Bibcode:1985hiss.book.....P. OSTI 5127296.
  189. ^ E. K. Burke, E. Hart, G. Kendall, J. Newall, P. Ross, and S. Schulenburg, Hyper-heuristics: An emerging direction in modern search technology, Handbook of Metaheuristics (F. Glover and G. Kochenberger, eds.), Kluwer, 2003, pp. 457–474.
  190. ^ P. Ross, Hyper-heuristics, Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques (E. K. Burke and G. Kendall, eds.), Springer, 2005, pp. 529-556.
  191. ^ E. Ozcan, B. Bilgin, E. E. Korkmaz, A Comprehensive Analysis of Hyper-heuristics, Intelligent Data Analysis, 12:1, pp. 3-23, 2008.
  192. ^ “IEEE CIS Scope”.
  193. ^ “Control of Machining Processes – Purdue ME Manufacturing Laboratories”engineering.purdue.edu.
  194. ^ Hoy, Matthew B. (2018). “Alexa, Siri, Cortana, and More: An Introduction to Voice Assistants”. Medical Reference Services Quarterly37 (1): 81–88. doi:10.1080/02763869.2018.1404391PMID 29327988.
  195. ^ Chevallier, Arnaud (2016). Strategic thinking in complex problem solving. Oxford; New York: Oxford University Pressdoi:10.1093/acprof:oso/9780190463908.001.0001ISBN 9780190463908OCLC 940455195.
  196. ^ “Strategy survival guide: Issue trees”interactive.cabinetoffice.gov.uk. London: Prime Minister’s Strategy Unit. July 2004. Archived from the original on 17 February 2012. Retrieved 6 October 2018. Also available in PDF format.
  197. ^ Paskin, Mark. “A Short Course on Graphical Models” (PDF). Stanford.
  198. ^ Woods, W. A.; Schmolze, J. G. (1992). “The KL-ONE family”. Computers & Mathematics with Applications. 23 (2–5): 133. doi:10.1016/0898-1221(92)90139-9.
  199. ^ Brachman, R. J.; Schmolze, J. G. (1985). “An Overview of the KL-ONE Knowledge Representation System” (PDF). Cognitive Science. 9 (2): 171. doi:10.1207/s15516709cog0902_1.
  200. ^ D.A. Duce, G.A. Ringland (1988). Approaches to Knowledge Representation, An Introduction. Research Studies Press, Ltd. ISBN 978-0-86380-064-1.
  201. ^ Roger Schank; Robert Abelson (1977). Scripts, Plans, Goals, and Understanding: An Inquiry Into Human Knowledge Structures. Lawrence Erlbaum Associates, Inc.
  202. ^ “Knowledge Representation in Neural Networks – deepMinds”. deepMinds. 16 August 2018. Retrieved 16 August 2018.
  203. ^ Edwin D. Reilly (2003). Milestones in computer science and information technology. Greenwood Publishing Group. pp. 156–157. ISBN 978-1-57356-521-9.
  204. ^ Sepp Hochreiter; Jürgen Schmidhuber (1997). “Long short-term memory”. Neural Computation. 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735. PMID 9377276.
  205. ^ Siegelmann, Hava T.; Sontag, Eduardo D. (1992). On the Computational Power of Neural Nets. ACM. COLT ’92. pp. 440–449. doi:10.1145/130385.130432. ISBN 978-0897914970.
  206. ^ Gagniuc, Paul A. (2017). Markov Chains: From Theory to Implementation and Experimentation. USA, NJ: John Wiley & Sons. pp. 1–235. ISBN 978-1-119-38755-8.
  207. ^ “Markov chain | Definition of Markov chain in US English by Oxford Dictionaries”. Oxford Dictionaries | English. Retrieved 14 December 2017.
  208. ^ Definition at Brilliant.org: “Brilliant Math and Science Wiki”. Retrieved 12 May 2019.
  209. ^ “The Nature of Mathematical Programming”, Archived 2014-03-05 at the Wayback Machine, Mathematical Programming Glossary, INFORMS Computing Society.
  210. ^ Wang, Wenwu (1 July 2010). Machine Audition: Principles, Algorithms and Systems. IGI Global. ISBN 9781615209194 – via http://www.igi-global.com.
  211. ^ “Machine Audition: Principles, Algorithms and Systems” (PDF).
  212. ^ Malcolm Tatum (October 3, 2012). “What is Machine Perception”.
  213. ^ Alexander Serov (January 29, 2013). “Subjective Reality and Strong Artificial Intelligence” (PDF).
  214. ^ “Machine Perception & Cognitive Robotics Laboratory”. http://www.ccs.fau.edu. Retrieved 18 June 2016.
  215. ^ Mechanical and Mechatronics Engineering Department. “What is Mechatronics Engineering?”. Prospective Student Information. University of Waterloo. Retrieved 30 May 2011.
  216. ^ Faculty of Mechatronics, Informatics and Interdisciplinary Studies TUL. “Mechatronics (Bc., Ing., PhD.)”. Retrieved 15 April 2011.
  217. ^ Franke; Siezen; Teusink (2005). “Reconstructing the metabolic network of a bacterium from its genome”. Trends in Microbiology. 13 (11): 550–558. doi:10.1016/j.tim.2005.09.001. PMID 16169729.
  218. ^ R. Balamurugan; A.M. Natarajan; K. Premalatha (2015). “Stellar-Mass Black Hole Optimization for Biclustering Microarray Gene Expression Data”. Applied Artificial Intelligence an International Journal. 29 (4): 353–381. doi:10.1080/08839514.2015.1016391.
  219. ^ Bianchi, Leonora; Marco Dorigo; Luca Maria Gambardella; Walter J. Gutjahr (2009). “A survey on metaheuristics for stochastic combinatorial optimization”. Natural Computing. 8 (2): 239–287. doi:10.1007/s11047-008-9098-4.
  220. ^ Herbert B. Enderton, 2001, A Mathematical Introduction to Logic Second Edition Enderton:110, Harcourt Academic Press, Burlington MA, ISBN 978-0-12-238452-3.
  221. ^ “Naive Semantics to Support Automated Database Design”, IEEE Transactions on Knowledge and Data Engineering, Volume 14, issue 1 (January 2002) by V. C. Storey, R. C. Goldstein and H. Ullrich.
  222. ^ Microsoft (11 May 2007), Using early binding and late binding in Automation, Microsoft, retrieved 11 May 2009
  223. ^ strictly speaking a URIRef
  224. ^ “Resource Description Framework (RDF) Model and Syntax Specification”, http://www.w3.org/TR/PR-rdf-syntax/.
  225. ^ Miller, Lance A. “Natural language programming: Styles, strategies, and contrasts.” IBM Systems Journal 20.2 (1981): 184–215.
  226. ^ “Deep Minds: An Interview with Google’s Alex Graves & Koray Kavukcuoglu”. Retrieved 17 May 2016.
  227. ^ Graves, Alex; Wayne, Greg; Danihelka, Ivo (2014). “Neural Turing Machines”. arXiv:1410.5401 [cs.NE].
  228. ^ Krucoff, Max O.; Rahimpour, Shervin; Slutzky, Marc W.; Edgerton, V. Reggie; Turner, Dennis A. (1 January 2016). “Enhancing Nervous System Recovery through Neurobiologics, Neural Interface Training, and Neurorehabilitation”. Frontiers in Neuroscience. 10: 584. doi:10.3389/fnins.2016.00584. PMC 5186786. PMID 28082858.
  229. ^ Monroe, D. (2014). “Neuromorphic computing gets ready for the (really) big time”. Communications of the ACM. 57 (6): 13–15. doi:10.1145/2601069.
  230. ^ Zhao, W. S.; Agnus, G.; Derycke, V.; Filoramo, A.; Bourgoin, J.-P.; Gamrat, C. (2010). “Nanotube devices based crossbar architecture: Toward neuromorphic computing”. Nanotechnology. 21 (17): 175202. Bibcode:2010Nanot..21q5202Z. doi:10.1088/0957-4484/21/17/175202. PMID 20368686.
  231. ^ The Human Brain Project SP 9: Neuromorphic Computing Platform on YouTube
  232. ^ Mead, Carver (1990). “Neuromorphic electronic systems” (PDF). Proceedings of the IEEE. 78 (10): 1629–1636. doi:10.1109/5.58356.
  233. ^ Maan, A. K.; Jayadevi, D. A.; James, A. P. (1 January 2016). “A Survey of Memristive Threshold Logic Circuits”. IEEE Transactions on Neural Networks and Learning Systems. PP (99): 1734–1746. arXiv:1604.07121. Bibcode:2016arXiv160407121M. doi:10.1109/TNNLS.2016.2547842. ISSN 2162-237X. PMID 27164608.
  234. ^ “A Survey of Spintronic Architectures for Processing-in-Memory and Neural Networks”, JSA, 2018.
  235. ^ Zhou, You; Ramanathan, S. (1 August 2015). “Mott Memory and Neuromorphic Devices”. Proceedings of the IEEE. 103 (8): 1289–1310. doi:10.1109/JPROC.2015.2431914. ISSN 0018-9219.
  236. ^ Copeland, Jack (May 2000). “What is Artificial Intelligence?”. AlanTuring.net. Retrieved 7 November 2015.
  237. ^ Kleinberg, Jon; Tardos, Éva (2006). Algorithm Design (2nd ed.). Addison-Wesley. p. 464. ISBN 0-321-37291-3.
  238. ^ Cobham, Alan (1965). “The intrinsic computational difficulty of functions”. Proc. Logic, Methodology, and Philosophy of Science II. North Holland.
  239. ^ “What is Occam’s Razor?”. math.ucr.edu. Retrieved 1 June 2019.
  240. ^ “OpenAI shifts from nonprofit to ‘capped-profit’ to attract capital”. TechCrunch. Retrieved 2019-05-10.
  241. ^ “OpenCog: Open-Source Artificial General Intelligence for Virtual Worlds | CyberTech News”. 6 March 2009. Archived from the original on 6 March 2009. Retrieved 1 October 2016.
  242. ^ St. Laurent, Andrew M. (2008). Understanding Open Source and Free Software Licensing. O’Reilly Media. p. 4. ISBN 9780596553951.
  243. ^ Levine, Sheen S.; Prietula, Michael J. (30 December 2013). “Open Collaboration for Innovation: Principles and Performance”. Organization Science. 25 (5): 1414–1433. arXiv:1406.7541. doi:10.1287/orsc.2013.0872. ISSN 1047-7039.
  244. ^ Bishop, Christopher M. (2006). Pattern Recognition and Machine Learning (PDF). Springer. p. vii. “Pattern recognition has its origins in engineering, whereas machine learning grew out of computer science. However, these activities can be viewed as two facets of the same field, and together they have undergone substantial development over the past ten years.”
  245. ^ Hughes, G. E.; Cresswell, M. J. A New Introduction to Modal Logic (London: Routledge, 1996), p. 161.
  246. ^ Nyce, Charles (2007), Predictive Analytics White Paper (PDF), American Institute for Chartered Property Casualty Underwriters/Insurance Institute of America, p. 1
  247. ^ Eckerson, Wayne (10 May 2007), Extending the Value of Your Data Warehousing Investment, The Data Warehouse Institute
  248. ^ Karl R. Popper, The Myth of the Framework, London (Routledge) 1994, chap. 8.
  249. ^ Karl R. Popper, The Poverty of Historicism, London (Routledge) 1960, chap. iv, sect. 31.
  250. ^ “Probabilistic programming does in 50 lines of code what used to take thousands”. phys.org. 13 April 2015. Retrieved 13 April 2015.
  251. ^ “Probabilistic Programming”. probabilistic-programming.org.
  252. ^ Pfeffer, Avrom (2014). Practical Probabilistic Programming. Manning Publications. p. 28. ISBN 978-1-6172-9233-0.
  253. ^ Clocksin, William F.; Mellish, Christopher S. (2003). Programming in Prolog. Berlin ; New York: Springer-Verlag. ISBN 978-3-540-00678-7.
  254. ^ Bratko, Ivan (2012). Prolog programming for artificial intelligence (4th ed.). Harlow, England ; New York: Addison Wesley. ISBN 978-0-321-41746-6.
  255. ^ Covington, Michael A. (1994). Natural language processing for Prolog programmers. Englewood Cliffs, N.J.: Prentice Hall. ISBN 978-0-13-629213-5.
  256. ^ Lloyd, J. W. (1984). Foundations of logic programming. Berlin: Springer-Verlag. ISBN 978-3-540-13299-8.
  257. ^ Kuhlman, Dave. “A Python Book: Beginning Python, Advanced Python, and Python Exercises”. Section 1.1. Archived from the original (PDF) on 23 June 2012.
  258. ^ Reiter, Raymond (2001). Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems. Cambridge, Massachusetts: The MIT Press. pp. 20–22. ISBN 9780262527002.
  259. ^ Thielscher, Michael (September 2001). “The Qualification Problem: A solution to the problem of anomalous models”. Artificial Intelligence131 (1–2): 1–37. doi:10.1016/S0004-3702(01)00131-X.
  260. ^ The National Academies of Sciences, Engineering, and Medicine (2019). Grumbling, Emily; Horowitz, Mark (eds.). Quantum Computing: Progress and Prospects (2018). Washington, DC: National Academies Press. p. I-5. doi:10.17226/25196. ISBN 978-0-309-47969-1. OCLC 1081001288.
  261. ^ R language and environment
    • Hornik, Kurt (4 October 2017). “R FAQ”. The Comprehensive R Archive Network. 2.1 What is R?. Retrieved 6 August 2018.

    R Foundation

    • Hornik, Kurt (4 October 2017). “R FAQ”. The Comprehensive R Archive Network. 2.13 What is the R Foundation?. Retrieved 6 August 2018.

    The R Core Team asks authors who use R in their data analysis to cite the software using:

    • R Core Team (2016). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL http://www.R-project.org/.
  262. ^ widely used
  263. ^ Vance, Ashlee (6 January 2009). “Data Analysts Captivated by R’s Power”. New York Times. Retrieved 6 August 2018. “R is also the name of a popular programming language used by a growing number of data analysts inside corporations and academia. It is becoming their lingua franca…”
  264. ^ Broomhead, D. S.; Lowe, David (1988). Radial basis functions, multi-variable functional interpolation and adaptive networks (Technical report). RSRE. 4148.
  265. ^ Broomhead, D. S.; Lowe, David (1988). “Multivariable functional interpolation and adaptive networks” (PDF). Complex Systems. 2: 321–355.
  266. ^ Schwenker, Friedhelm; Kestler, Hans A.; Palm, Günther (2001). “Three learning phases for radial-basis-function networks”. Neural Networks. 14 (4–5): 439–458. CiteSeerX 10.1.1.109.312. doi:10.1016/s0893-6080(01)00027-2.
  267. ^ Ho, Tin Kam (1995). Random Decision Forests (PDF). Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, 14–16 August 1995. pp. 278–282. Archived from the original (PDF) on 17 April 2016. Retrieved 5 June 2016.
  268. ^ Ho TK (1998). “The Random Subspace Method for Constructing Decision Forests” (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 20 (8): 832–844. doi:10.1109/34.709601.
  269. ^ Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome (2008). The Elements of Statistical Learning (2nd ed.). Springer. ISBN 0-387-95284-5.
  270. ^ Graves, A.; Liwicki, M.; Fernandez, S.; Bertolami, R.; Bunke, H.; Schmidhuber, J. (2009). “A Novel Connectionist System for Improved Unconstrained Handwriting Recognition” (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 31 (5): 855–868. CiteSeerX 10.1.1.139.4502. doi:10.1109/tpami.2008.137. PMID 19299860.
  271. ^ Sak, Hasim; Senior, Andrew; Beaufays, Francoise (2014). “Long Short-Term Memory recurrent neural network architectures for large scale acoustic modeling” (PDF).
  272. ^ Li, Xiangang; Wu, Xihong (15 October 2014). “Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition”. arXiv:1410.4281 [cs.CL].
  273. ^ Kaelbling, Leslie P.; Littman, Michael L.; Moore, Andrew W. (1996). “Reinforcement Learning: A Survey”. Journal of Artificial Intelligence Research. 4: 237–285. arXiv:cs/9605103. doi:10.1613/jair.301. Archived from the original on 20 November 2001.
  274. ^ Schrauwen, Benjamin; David Verstraeten; and Jan Van Campenhout. “An overview of reservoir computing: theory, applications, and implementations.” Proceedings of the European Symposium on Artificial Neural Networks ESANN 2007, pp. 471–482.
  275. ^ Maass, Wolfgang; T. Nachtschlaeger; and H. Markram. “Real-time computing without stable states: A new framework for neural computation based on perturbations.” Neural Computation 14(11): 2531–2560 (2002).
  276. ^ Jaeger, Herbert, “The echo state approach to analyzing and training recurrent neural networks.” Technical Report 154 (2001), German National Research Center for Information Technology.
  277. ^ Echo state network, Scholarpedia
  278. ^ “XML and Semantic Web W3C Standards Timeline” (PDF). 4 February 2012.
  279. ^ See, for example, Boolos and Jeffrey, 1974, chapter 11.
  280. ^ John F. Sowa (1987). “Semantic Networks”. In Stuart C Shapiro (ed.). Encyclopedia of Artificial Intelligence. Retrieved 29 April 2008.
  281. ^ O’Hearn, P. W.; Pym, D. J. (June 1999). “The Logic of Bunched Implications”. Bulletin of Symbolic Logic. 5 (2): 215–244. CiteSeerX 10.1.1.27.4742. doi:10.2307/421090. JSTOR 421090.
  282. ^ Abran et al. 2004, pp. 1–1
  283. ^ ACM (2007). “Computing Degrees & Careers”. ACM. Retrieved 23 November 2010.
  284. ^ Laplante, Phillip (2007). What Every Engineer Should Know about Software Engineering. Boca Raton: CRC. ISBN 978-0-8493-7228-5. Retrieved 21 January 2011.
  285. ^ Jim Rapoza (2 May 2006). “SPARQL Will Make the Web Shine”. eWeek. Retrieved 17 January 2007.
  286. ^ Segaran, Toby; Evans, Colin; Taylor, Jamie (2009). Programming the Semantic Web. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472. p. 84. ISBN 978-0-596-15381-6.
  287. ^ Maass, Wolfgang (1997). “Networks of spiking neurons: The third generation of neural network models”. Neural Networks10 (9): 1659–1671. doi:10.1016/S0893-6080(97)00011-7ISSN 0893-6080.
  288. ^ “What is stateless? – Definition from WhatIs.com”. techtarget.com.
  289. ^ Lise Getoor and Ben Taskar, Introduction to Statistical Relational Learning, MIT Press, 2007.
  290. ^ Ryan A. Rossi, Luke K. McDowell, David W. Aha, and Jennifer Neville, “Transforming Graph Data for Statistical Relational Learning.” Journal of Artificial Intelligence Research (JAIR), Volume 45 (2012), pp. 363–441.
  291. ^ Spall, J. C. (2003). Introduction to Stochastic Search and Optimization. Wiley. ISBN 978-0-471-33052-3.
  292. ^ Language Understanding Using Two-Level Stochastic Models by F. Pla, et al, 2001, Springer Lecture Notes in Computer Science ISBN 978-3-540-42557-1
  293. ^ Stuart J. Russell, Peter Norvig (2010) Artificial Intelligence: A Modern Approach, Third Edition, Prentice Hall ISBN 9780136042594.
  294. ^ Mehryar Mohri, Afshin Rostamizadeh, Ameet Talwalkar (2012) Foundations of Machine Learning, The MIT Press ISBN 9780262018258.
  295. ^ Cortes, Corinna; Vapnik, Vladimir N. (1995). “Support-vector networks” (PDF). Machine Learning. 20 (3): 273–297. CiteSeerX 10.1.1.15.9362. doi:10.1007/BF00994018.
  296. ^ Beni, G., Wang, J. (1993). “Swarm Intelligence in Cellular Robotic Systems”. Proceed. NATO Advanced Workshop on Robots and Biological Systems, Tuscany, Italy, June 26–30 (1989). pp. 703–712. doi:10.1007/978-3-642-58069-7_38ISBN 978-3-642-63461-1.
  297. ^ Haugeland 1985, p. 255.
  298. ^ Poole, Mackworth & Goebel 1998, p. 1.
  299. ^ Cadwalladr, Carole (2014). “Are the robots about to rise? Google’s new director of engineering thinks so…” The Guardian. Guardian News and Media Limited.
  300. ^ “Collection of sources defining ‘singularity’”. singularitysymposium.com. Retrieved 17 April 2019.
  301. ^ Eden, Amnon H.; Moor, James H. (2012). Singularity hypotheses: A Scientific and Philosophical Assessment. Dordrecht: Springer. pp. 1–2. ISBN 9783642325601.
  302. ^ Richard Sutton & Andrew Barto (1998). Reinforcement Learning. MIT Press. ISBN 978-0-585-02445-5. Archived from the original on 30 March 2017.
  303. ^ Pellionisz, A.; Llinás, R. (1980). “Tensorial Approach To The Geometry Of Brain Function: Cerebellar Coordination Via A Metric Tensor” (PDF). Neuroscience. 5 (7): 1125–1136. doi:10.1016/0306-4522(80)90191-8. PMID 6967569.
  304. ^ Pellionisz, A.; Llinás, R. (1985). “Tensor Network Theory Of The Metaorganization Of Functional Geometries In The Central Nervous System”. Neuroscience. 16 (2): 245–273. doi:10.1016/0306-4522(85)90001-6. PMID 4080158.
  305. ^ “TensorFlow: Open source machine learning”. “It is machine learning software being used for various kinds of perceptual and language understanding tasks” — Jeffrey Dean, minute 0:47 / 2:17 from YouTube clip.
  306. ^ Michael Sipser (2013). Introduction to the Theory of Computation (3rd ed.). Cengage Learning. ISBN 978-1-133-18779-0. “central areas of the theory of computation: automata, computability, and complexity.” (Page 1)
  307. ^ Thompson, William R. “On the likelihood that one unknown probability exceeds another in view of the evidence of two samples”. Biometrika, 25(3–4):285–294, 1933.
  308. ^ Daniel J. Russo, Benjamin Van Roy, Abbas Kazerouni, Ian Osband and Zheng Wen (2018), “A Tutorial on Thompson Sampling”, Foundations and Trends in Machine Learning: Vol. 11: No. 1, pp 1-96.
  309. ^ Mercer, Calvin. Religion and Transhumanism: The Unknown Future of Human Enhancement. Praeger.
  310. ^ Bostrom, Nick (2005). “A history of transhumanist thought” (PDF). Journal of Evolution and Technology. Retrieved 21 February 2006.
  311. ^ Turing originally suggested a teleprinter, one of the few text-only communication systems available in 1950. (Turing 1950, p. 433)
  312. ^ Pierce 2002, p. 1: “A type system is a tractable syntactic method for proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute.”
  313. ^ Cardelli 2004, p. 1: “The fundamental purpose of a type system is to prevent the occurrence of execution errors during the running of a program.”
  314. ^ Hinton, Jeffrey; Sejnowski, Terrence (1999). Unsupervised Learning: Foundations of Neural Computation. MIT Press. ISBN 978-0262581684.
  315. ^ Seth Colaner; Matthew Humrick (3 January 2016). “A third type of processor for AR/VR: Movidius’ Myriad 2 VPU”. Tom’s Hardware.
  316. ^ Prasid Banerje (28 March 2016). “The rise of VPUs: Giving Eyes to Machines”. Digit.in.
  317. ^ “DeepQA Project: FAQ”. IBM. Retrieved 11 February 2011.
  318. ^ Ferrucci, David; Levas, Anthony; Bagchi, Sugato; Gondek, David; Mueller, Erik T. (1 June 2013). “Watson: Beyond Jeopardy!”. Artificial Intelligence. 199: 93–105. doi:10.1016/j.artint.2012.06.009.
  319. ^ Hale, Mike (8 February 2011). “Actors and Their Roles for $300, HAL? HAL!”. The New York Times. Retrieved 11 February 2011.
  320. ^ “The DeepQA Project”. IBM Research. Retrieved 18 February 2011.
  321. ^ io9.com mentions narrow AI. Published 1 April 2013, retrieved 16 February 2014: http://io9.com/how-much-longer-before-our-first-ai-catastrophe-464043243
  322. ^ AI researcher Ben Goertzel explains why he became interested in AGI instead of narrow AI. Published 18 Oct 2013. Retrieved 16 February 2014. http://intelligence.org/2013/10/18/ben-goertzel/
  323. ^ TechCrunch discusses AI App building regarding Narrow AI. Published 16 Oct 2015, retrieved 17 Oct 2015. https://techcrunch.com/2015/10/15/machine-learning-its-the-hard-problems-that-are-valuable/

Notes

  1. ^ Polynomial time refers to how quickly the number of operations an algorithm needs grows relative to the size of the problem; it is therefore a measure of the algorithm's efficiency.
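As a rough illustration of the note above (a minimal sketch, not part of the original text), the operation counts below contrast polynomial growth (n², typical of simple nested-loop algorithms) with exponential growth (2ⁿ), showing why polynomial-time algorithms are considered efficient:

```python
def op_counts(n):
    """Return (polynomial, exponential) operation counts for problem size n."""
    return n ** 2, 2 ** n

# Polynomial growth stays manageable as n rises; exponential growth explodes.
for n in (10, 20, 30):
    poly, expo = op_counts(n)
    print(f"n={n}: n^2={poly}, 2^n={expo}")
# n=10: n^2=100,  2^n=1024
# n=20: n^2=400,  2^n=1048576
# n=30: n^2=900,  2^n=1073741824
```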
