
What is AI?

Artificial intelligence (AI) is a set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand and translate spoken and written language, analyze data, make recommendations, and more.


Applications and devices equipped with AI can see and identify objects. They can understand and respond to human language. They can learn from new information and experience. They can make detailed recommendations to users and experts. They can act independently, replacing the need for human intelligence or intervention (a classic example being a self-driving car).


Stages of AI development


Artificial intelligence can be organized in several ways, depending on its stage of development or the actions being performed.


For instance, four stages of AI development are commonly recognized.


1. Reactive machines: Limited AI that only reacts to different kinds of stimuli based on preprogrammed rules. It does not use memory and thus cannot learn from new data. IBM’s Deep Blue, which beat chess champion Garry Kasparov in 1997, was an example of a reactive machine.


2. Limited memory: Most modern AI is considered to be limited memory. It can use memory to improve over time by being trained with new data, typically through an artificial neural network or other training model. Deep learning, a subset of machine learning, is considered limited memory artificial intelligence.


3. Theory of mind: Theory of mind AI does not currently exist, but research is ongoing into its possibilities. It describes AI that can emulate the human mind and has decision-making capabilities equal to those of a human, including recognizing and remembering emotions and reacting in social situations as a human would.


4. Self-aware: A step above theory of mind AI, self-aware AI describes a mythical machine that is aware of its own existence and has the intellectual and emotional capabilities of a human. Like theory of mind AI, self-aware AI does not currently exist.


Evolution of AI


A simple way to think about AI is as a series of nested or derivative concepts that have emerged over more than 70 years:


[Image: History of AI]


Machine Learning


Directly underneath AI, we have machine learning, which involves creating models by training an algorithm to make predictions or decisions based on data. It encompasses a broad range of techniques that enable computers to learn from and make inferences based on data without being explicitly programmed for specific tasks.


There are many types of machine learning techniques or algorithms, including linear regression, logistic regression, decision trees, random forest, support vector machines (SVMs), k-nearest neighbor (KNN), clustering and more. Each of these approaches is suited to different kinds of problems and data.
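As a rough illustration (assuming scikit-learn is installed; the iris dataset and model settings below are arbitrary choices, not anything prescribed by this article), the sketch that follows fits three of these algorithms, logistic regression, a decision tree, and k-nearest neighbors, and shows that they share the same fit-then-evaluate workflow.

```python
# Illustrative only: three classic algorithms, one shared fit/predict workflow.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
}

for name, model in models.items():
    model.fit(X_train, y_train)             # learn from labeled examples
    accuracy = model.score(X_test, y_test)  # evaluate on held-out data
    print(f"{name}: {accuracy:.2f}")
```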

But one of the most popular types of machine learning algorithms is the neural network (or artificial neural network). Neural networks are modeled after the human brain’s structure and function. A neural network consists of interconnected layers of nodes (analogous to neurons) that work together to process and analyze complex data. Neural networks are well suited to tasks that involve identifying complex patterns and relationships in large amounts of data.
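To make “interconnected layers of nodes” concrete, here is a toy sketch in plain NumPy. The weights are random placeholders for illustration; in a real network they would be learned from data during training.

```python
# Illustrative only: a forward pass through a tiny network of "nodes".
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# 4 input features -> 8 hidden nodes -> 3 output scores.
# The weights here are random; in a trained network they are learned from data.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

x = np.array([5.1, 3.5, 1.4, 0.2])   # one input example with 4 features
hidden = relu(x @ W1 + b1)           # hidden layer: 8 interconnected nodes
scores = hidden @ W2 + b2            # output layer: 3 nodes
print(scores)
```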

The simplest form of machine learning is called supervised learning, which involves the use of labeled data sets to train algorithms to classify data or predict outcomes accurately. In supervised learning, humans pair each training example with an output label. The goal is for the model to learn the mapping between inputs and outputs in the training data, so it can predict the labels of new, unseen data.
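A minimal sketch of that idea, using only NumPy and an invented toy dataset: the model learns the mapping between labeled inputs and outputs (roughly y = 2x + 1 here) and then predicts the label of an unseen input.

```python
# Illustrative only: supervised learning as learning an input -> output mapping.
import numpy as np

X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # inputs
y = np.array([1.1, 2.9, 5.2, 7.0, 9.1])   # labels paired with each input

# Fit a line y = slope * x + intercept by ordinary least squares.
A = np.column_stack([X, np.ones_like(X)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

x_new = 5.0                                # unseen input
print(slope * x_new + intercept)           # predicted label, roughly 11
```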


Deep Learning


Deep learning is a subset of machine learning that uses multilayered neural networks, called deep neural networks, that more closely simulate the complex decision-making power of the human brain.

Deep neural networks include an input layer, at least three (but usually hundreds of) hidden layers, and an output layer; by contrast, neural networks used in classic machine learning models usually have only one or two hidden layers.
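For illustration only, here is a sketch of such an architecture (assuming PyTorch is available; the layer sizes are arbitrary): an input layer, four hidden layers, and an output layer.

```python
# Illustrative only: a deep neural network with an input layer, four hidden
# layers, and an output layer (all sizes chosen arbitrarily for the sketch).
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),   # input layer -> hidden layer 1
    nn.Linear(64, 64), nn.ReLU(),   # hidden layer 2
    nn.Linear(64, 64), nn.ReLU(),   # hidden layer 3
    nn.Linear(64, 64), nn.ReLU(),   # hidden layer 4
    nn.Linear(64, 3),               # output layer: 3 class scores
)

x = torch.randn(8, 20)              # a batch of 8 examples, 20 features each
print(model(x).shape)               # torch.Size([8, 3])
```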

These multiple layers enable unsupervised learning: deep networks can automate the extraction of features from large, unlabeled, and unstructured data sets, and make their own predictions about what the data represents.
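One common way this plays out in practice is an autoencoder, which learns compact features from unlabeled data by reconstructing its own input. The sketch below (PyTorch assumed, with random stand-in data) is illustrative rather than a production recipe.

```python
# Illustrative only: an autoencoder learns features from unlabeled data by
# reconstructing its own input; no human-provided labels are involved.
import torch
from torch import nn

autoencoder = nn.Sequential(
    nn.Linear(784, 64), nn.ReLU(),   # encoder: compress 784 values to 64 features
    nn.Linear(64, 784),              # decoder: rebuild the original 784 values
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

data = torch.rand(256, 784)          # stand-in for unlabeled data (e.g. flattened images)
for _ in range(5):                   # a few training steps
    reconstruction = autoencoder(data)
    loss = loss_fn(reconstruction, data)   # the input is its own target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(loss.item())
```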

Because deep learning doesn’t require human intervention to extract features, it enables machine learning at tremendous scale. It is well suited to natural language processing (NLP), computer vision, and other tasks that involve the fast, accurate identification of complex patterns and relationships in large amounts of data. Some form of deep learning powers most of the artificial intelligence (AI) applications in our lives today.


[Image: Deep neural network]


Deep learning also enables:

  • Semi-supervised learning, which combines supervised and unsupervised learning by using both labeled and unlabeled data to train AI models for classification and regression tasks.

  • Self-supervised learning, which generates implicit labels from unstructured data, rather than relying on labeled data sets for supervisory signals.

  • Reinforcement learning, which learns by trial and error and reward signals rather than by extracting information from hidden patterns (a minimal sketch follows this list).

  • Transfer learning, in which knowledge gained through one task or data set is used to improve model performance on another related task or different data set.
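To make the reinforcement learning idea concrete, here is a self-contained toy sketch of tabular Q-learning on a made-up five-cell corridor: the agent learns, from trial and error and a reward signal alone, that stepping right reaches the goal. The environment and hyperparameters are invented for illustration.

```python
# Illustrative only: tabular Q-learning on a made-up 5-cell corridor.
# The agent learns by trial and error, guided only by a reward at the goal.
import random

n_states, actions = 5, [-1, +1]          # states 0..4; actions: step left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration rate

for _ in range(500):                     # episodes of trial and error
    state = 0
    while state != n_states - 1:
        if random.random() < epsilon:    # explore
            action = random.choice(actions)
        else:                            # exploit what has been learned so far
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The learned policy: each non-goal state should prefer moving right (+1).
print({s: max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)})
```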


Generative AI


Generative AI, sometimes called “gen AI”, refers to deep learning models that can create complex original content — such as long-form text, high-quality images, realistic video or audio and more — in response to a user’s prompt or request.

At a high level, generative models encode a simplified representation of their training data, and then draw from that representation to create new work that’s similar, but not identical, to the original data.

Generative models have been used for years in statistics to analyze numerical data. But over the last decade, they evolved to analyze and generate more complex data types. This evolution coincided with the emergence of three sophisticated deep learning model types:

  • Variational autoencoders (VAEs), introduced in 2013, which enabled models that could generate multiple variations of content in response to a prompt or instruction.

  • Diffusion models, first seen in 2014, which add “noise” to images until they are unrecognizable, and then remove the noise to generate original images in response to prompts.

  • Transformers (also called transformer models), which are trained on sequenced data to generate extended sequences of content (such as words in sentences, shapes in an image, frames of a video or commands in software code). Transformers are at the core of most of today’s headline-making generative AI tools, including ChatGPT and GPT-4, Copilot, BERT, Bard and Midjourney.
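As a small, hedged example of prompting a transformer-based generative model (assuming the Hugging Face transformers library and the publicly released GPT-2 model, neither of which is prescribed by the text above), a few lines are enough to generate text from a prompt:

```python
# Illustrative only: prompting a transformer (GPT-2 via Hugging Face) to
# generate text. The library and model are assumptions, not requirements.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Generative AI refers to",
    max_new_tokens=30,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```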


Key Components of AI Applications


AI applications generally involve the use of data, algorithms, and human feedback. Ensuring each of these components is appropriately structured and validated is important for the development and implementation of AI applications. The discussion that follows highlights how each of these components influences the development of AI applications.


  • Data: Data generation in the financial services industry has grown exponentially over the past decade, in part due to the use of mobile technologies and the digitization of data. The importance of data has likewise rapidly increased, and some have even referred to data as a more valuable resource than oil.14 Furthermore, cloud technology has enabled firms to collect, store, and analyze very large datasets at very low cost. Firms in the financial services industry now collect data from a variety of internal sources (e.g., trading desks, customer account history, and communications) and external sources (e.g., public filings, social media platforms, and satellite images) in both structured and unstructured formats, and analyze this data to identify opportunities for revenue generation as well as cost savings. This explosion of data in the financial services industry is one of the key factors contributing to the increased exploration of AI in the industry. Data plays a critical role in the training and success of any AI application. AI applications are generally designed to analyze data by identifying patterns and to make determinations or predictions based on those patterns. These applications learn continuously and iteratively, both from inaccurate determinations (typically identified through human review) and from new information, and refine their outputs accordingly. Therefore, AI applications are generally best positioned to yield meaningful results when the underlying datasets are substantially large, valid, and current.


  • Algorithms: An algorithm is a set of well-defined, step-by-step instructions for a machine to solve a specific problem and generate an output using a set of input data. AI algorithms, particularly those used for ML, involve complex mathematical code designed to enable machines to continuously learn from new input data and develop new or adjusted output based on those learnings. An AI algorithm is “not programmed to perform a task, but is programmed to learn to perform the task.”15 The availability of open-source AI algorithms, including those from some of the largest technology companies, has helped fuel AI innovation and made the technology more accessible to the financial industry. (A brief sketch of such a continuously learning algorithm follows this list.)


  • Human interaction: Human involvement is imperative throughout the lifecycle of any AI application, from preparing the data and the algorithms to testing the output, retraining the model, and verifying results. As data is collected and prepared, human reviews are essential to curate the data as appropriate for the application. As algorithms sift through data and generate output (e.g., classifications, outliers, and predictions), the next critical component is human review of the output for relevancy, accuracy, and usefulness. Business and technology stakeholders typically work together to analyze AI-based output and give appropriate feedback to the AI systems for refinement of the model. Absence of such human review and feedback may lead to irrelevant, incorrect, or inappropriate results from the AI systems, potentially creating inefficiencies, foregone opportunities, or new risks if actions are taken based on faulty results.
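To illustrate the Algorithms point above, that an AI algorithm is “programmed to learn to perform the task” and keeps adjusting as new input data arrives, here is a minimal sketch using scikit-learn’s SGDClassifier and its partial_fit method on an invented stream of toy data; the specific model and data are assumptions chosen only for illustration.

```python
# Illustrative only: an algorithm that keeps learning as new input data arrives.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()
classes = np.array([0, 1])

for batch in range(10):                                    # data arriving over time
    X_new = rng.normal(size=(100, 5))
    y_new = (X_new[:, 0] + X_new[:, 1] > 0).astype(int)    # toy labeling rule
    if batch == 0:
        model.partial_fit(X_new, y_new, classes=classes)   # first update
    else:
        model.partial_fit(X_new, y_new)                    # incremental updates

X_check = rng.normal(size=(1000, 5))
y_check = (X_check[:, 0] + X_check[:, 1] > 0).astype(int)
print(model.score(X_check, y_check))                       # accuracy on fresh data
```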


Benefits of AI


  • Automation: AI can automate workflows and processes or work independently and autonomously from a human team. For example, AI can help automate aspects of cybersecurity by continuously monitoring and analyzing network traffic. Similarly, a smart factory may have dozens of different kinds of AI in use, such as robots that use computer vision to navigate the factory floor or inspect products for defects, systems that create digital twins, and real-time analytics that measure efficiency and output.


  • Reduce human error: AI can eliminate manual errors in data processing, analytics, assembly in manufacturing, and other tasks through automation and algorithms that follow the same processes every single time.

  • Eliminate repetitive tasks: AI can be used to perform repetitive tasks, freeing human capital to work on higher-impact problems. AI can be used to automate processes, like verifying documents, transcribing phone calls, or answering simple customer questions like “what time do you close?” Robots are often used to perform “dull, dirty, or dangerous” tasks in the place of a human.

  • Fast and accurate: AI can process more information more quickly than a human, finding patterns and discovering relationships in data that a human may miss.

  • Infinite availability: AI is not limited by time of day, the need for breaks, or other human encumbrances. When running in the cloud, AI and machine learning can be “always on,” continuously working on their assigned tasks.

  • Accelerated research and development: The ability to analyze vast amounts of data quickly can lead to accelerated breakthroughs in research and development. For instance, AI has been used in predictive modeling of potential new pharmaceutical treatments, or to help map the human genome.

