By Hudson Cooper


The traditional 60/40 portfolio is a good example of a portfolio that has been constructed to manage volatility. Because fluctuations in the value of equities and fixed income securities are largely uncorrelated, they are able to offset each other’s volatility without sacrificing too much in terms of expected returns. 50/50 or 40/60 portfolios work in the same basic way, but they are tuned for slightly different overall volatility tolerance levels.
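The offsetting effect comes straight out of the two-asset volatility formula. Here is a minimal sketch; the volatilities and correlation below are illustrative assumptions, not figures from this post:

```python
import math

# Hypothetical annualized volatilities and correlation -- illustrative
# assumptions only, not measured values from this article.
vol_equity, vol_bonds, correlation = 0.16, 0.12, 0.0

def portfolio_vol(w_equity, ve, vb, rho):
    """Volatility of a two-asset portfolio with weights (w, 1 - w)."""
    w_bonds = 1.0 - w_equity
    variance = ((w_equity * ve) ** 2 + (w_bonds * vb) ** 2
                + 2.0 * w_equity * w_bonds * rho * ve * vb)
    return math.sqrt(variance)

# With near-zero correlation, the 60/40 blend's volatility sits well
# below the weighted average of the two asset volatilities.
all_equity = portfolio_vol(1.0, vol_equity, vol_bonds, correlation)
blend_60_40 = portfolio_vol(0.6, vol_equity, vol_bonds, correlation)
```

Shifting the weight toward bonds (50/50, 40/60) simply slides the result further down this curve, which is the "tuning" described above.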


Let’s look at a concrete example and compare SPY to a 60/40 portfolio of SPY and TLT, an ETF tracking 20+ year Treasury bonds. To put the two on equal footing, the 60/40 portfolio has been scaled up to have the same realized ROI as SPY since 2003.


Looking at their largest drawdowns over this period, we can actually see that the 60/40 portfolio has done a decent job of reducing the impact of these events. The magnitude of the 2008 financial crisis was reduced by almost 44%, and its duration was reduced by about 43%. Several of the other large drawdowns were similarly abated.
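Drawdown magnitude and duration can both be read off a price series by tracking the running peak. A minimal sketch, using a made-up price series for illustration:

```python
def max_drawdown(prices):
    """Largest peak-to-trough drawdown of a price series.

    Returns (magnitude, duration): magnitude is the largest fractional
    drop from a running peak; duration is the longest stretch of
    consecutive periods spent below that running peak.
    """
    peak = float("-inf")
    worst = 0.0
    duration = run = 0
    for p in prices:
        peak = max(peak, p)
        dd = 1.0 - p / peak          # fractional drop from the peak
        worst = max(worst, dd)
        run = run + 1 if dd > 0 else 0
        duration = max(duration, run)
    return worst, duration

# Made-up closes, for illustration only.
prices = [100, 110, 90, 80, 95, 115, 120]
mag, dur = max_drawdown(prices)  # ~27.3% drop from the 110 peak, 3 periods underwater
```

The percentage reductions quoted above are exactly these two quantities compared between SPY and the scaled 60/40 portfolio.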



So if 60/40 has worked so well in the past, why am I harping on about the ineffectiveness of traditional portfolios? What’s really crucial here is that we cannot take historical data at face value. The most extreme event in history is a really poor estimate of the most extreme event we should expect to see in the future, because in a real sense, “the worst has yet to come”. We need to be able to mitigate the risk of the next big “unprecedented” event, and that means we need to move beyond looking at raw historical data.


By looking at how the frequency of drawdowns falls as their magnitude grows, we can infer how often to expect unprecedented drawdowns, even if we don’t necessarily have explicit models of the mechanisms that will cause them. Let me show you visually what I mean.


This graph shows the frequency of drawdowns on the Y-axis plotted against their magnitudes on the X-axis. The frequency here is expressed as the probability that any drawdown lasting at least a week meets or exceeds the magnitude expressed on the X-axis.
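The Y-axis quantity is just an empirical exceedance probability. A sketch, with hypothetical drawdown magnitudes standing in for the historical sample:

```python
# Hypothetical drawdown magnitudes (as fractions of portfolio value),
# for illustration only -- not the actual SPY or 60/40 data.
drawdowns = [0.02, 0.03, 0.05, 0.08, 0.10, 0.15, 0.22, 0.40]

def exceedance_prob(sample, threshold):
    """Empirical P(drawdown >= threshold): the plot's Y-axis value."""
    hits = sum(1 for x in sample if x >= threshold)
    return hits / len(sample)

# Fraction of drawdowns meeting or exceeding a 10% loss
p_10 = exceedance_prob(drawdowns, 0.10)
```

Sweeping the threshold across the observed magnitudes traces out the scattered points in the graph.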


The scattered points correspond to the historical data (3-month time horizon), while the dotted lines correspond to statistical models that let us make inferences about behavior far into the upper tail. The shaded regions are confidence intervals that let us express the uncertainty in our models due to variability in the historical data.


Here we are just looking at the smallest and most frequent drawdowns, the ones that occur in 90% of the cases and that are less than around 10% in magnitude. We can see a really clear separation between SPY and the 60/40 portfolio here, meaning that 60/40 is able to effectively reduce the frequency of these drawdowns. This is exactly as we should expect since the 60/40 portfolio is designed to mitigate these small, volatility-driven risks.  



Now let’s zoom out and look at the rarer and larger drawdowns. The separation that we saw before really breaks down. Particularly as we look at drawdowns occurring with probability less than 1% and with losses of 20, 40, 60%, and above, the model uncertainty is simply too high to conclude that the 60/40 portfolio effectively manages these risks. Quantifying this uncertainty is a really important step because we would otherwise feel unjustifiably safe in the 60/40 portfolio and make ill-informed investment decisions. 


This means that we really can’t trust that these kinds of portfolios, often sold as “safe”, actually protect our assets from extreme events that are rare but catastrophic, and that may occur relatively frequently over the longer time horizons faced by those preparing for retirement.


It is important that we manage the risks that actually threaten our goals, and that means we need to move away from volatility-based approaches to approaches that model and manage long-term tail risks directly. 


Thankfully, there is a whole field of statistics called “Extreme Value Theory” that has been built to estimate, model, and reason about the magnitudes and frequencies of extreme and unprecedented events. It has its roots in meteorology and the climate sciences and is used to estimate the size of one-in-100-year floods and to detect systematic changes in the occurrence rates of extreme weather events due to climate change. Combining this statistical framework with the flexibility, adaptability, and efficacy of modern AI and Machine Learning has really opened the door to building portfolios that can extract profit while being far more robust to these risks that we care about.
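To make the idea concrete: a standard EVT approach (peaks-over-threshold) fits a Generalized Pareto Distribution to the losses that exceed some threshold, then extrapolates tail probabilities from the fit. The sketch below uses a simple method-of-moments estimator on made-up excesses; serious EVT work uses maximum likelihood and proper confidence intervals, and none of this is AlphaTrAI's actual model:

```python
import statistics

def fit_gpd_moments(excesses):
    """Method-of-moments fit of a Generalized Pareto Distribution to
    threshold excesses. Returns (shape, scale). A simple estimator for
    illustration; production EVT work would use maximum likelihood."""
    m = statistics.mean(excesses)
    v = statistics.variance(excesses)  # sample variance
    shape = 0.5 * (1.0 - m * m / v)
    scale = 0.5 * m * (m * m / v + 1.0)
    return shape, scale

def tail_prob(y, shape, scale, p_exceed_threshold):
    """P(loss > threshold + y) under the fitted GPD tail.

    Assumes shape != 0 (the exponential limit needs a separate formula).
    """
    return p_exceed_threshold * (1.0 + shape * y / scale) ** (-1.0 / shape)

# Made-up excesses over a loss threshold, for illustration.
shape, scale = fit_gpd_moments([1.0, 2.0, 3.0, 4.0, 10.0])
```

Given the probability of exceeding the threshold at all, `tail_prob` then extrapolates how often losses beyond any larger level should occur, which is exactly the "unprecedented drawdown" inference described earlier.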

In our mission to help lead the financial community to embrace cutting-edge technologies and algorithmic approaches, we are thrilled to share with you our debut episode of AlphaBytes, an educational video series.  


The purpose of this series is to bring educational content and awareness to the rapidly changing dynamics of the financial markets. 


Since AlphaTrAI has been built fundamentally around risk & return and dynamic risk management, we thought it would be suitable for us to discuss “Risk Management in a World of Rising Yields”. 


Topics we discuss are:

  • Problems with the Traditional Approach of Risk Measurement
  • Impact of Larger Risks on a 60/40 Portfolio
  • Declining vs. Rising Bond Yields


Click below to watch:


By Andreas Roell


Last week’s post set out to decipher the various approaches to trading in the financial markets. It was prompted by the difficulty the financial community has in classifying the emerging mechanics of algorithmic trading, the category AlphaTrAI is part of.


In the first blog, I described algorithms-at-the-core traders like us as autonomous traders for descriptive purposes. This week, I want to go a step further and dissect the state of autonomy for firms like AlphaTrAI.


How autonomous is the current state of the art? What does autonomy truly mean? And how can you ask smart questions of firms like ours to identify where on the spectrum they truly sit?


To make this post exciting for you, there is a chance that I might shock you with my perspective despite the fact that I am the chief executive officer of an algorithms-at-the-core shop.


First things first. The best way to think about this discussion is to align it to the commonly adopted classification of autonomy in automobiles. 


Level 0 – No Automation
This describes your everyday car. No bells and whistles. Just your ordinary cruise control to help with long-distance driving and minimize the risk of a speeding ticket from a lead foot. Almost all cars today will offer Level 0 autonomous technology.


Level 1 – Driver Assistance
Here we find adaptive cruise control and lane-keep assist, which help with driving fatigue. Adaptive cruise control keeps a safe distance between you and the car ahead by using radars and/or cameras to automatically apply the brakes when traffic slows and resume speed when traffic clears. Lane-keep assist nudges you back into the lane should you veer off a bit. These systems assist drivers but still require the driver to be in control. You can find Level 1 autonomy in most cars today.


Level 2 – Partial Automation

This is where it gets a bit more interesting. Although the driver must have hands on the wheel and be ready to take control at any given moment, level 2 automation can assist in controlling speed and steering. It will help with stop-and-go traffic by maintaining the distance between you and the vehicle in front of you, while also providing steering assist by centering the car within the lane. These features are a godsend for commuters! Tesla Autopilot, Volvo Pilot Assist, Audi Traffic Jam Assist are some examples of Level 2 autonomous capabilities.


Level 3 – Conditional Automation
Level 3 autonomous vehicles are capable of driving themselves, but only under ideal conditions and with limitations, such as limited-access divided highways at a certain speed. Although hands are off the wheel, a human driver is still required behind the wheel to take over should road conditions fall below ideal. Some carmakers, like Mercedes, offer these capabilities but cap the maximum hands-off-the-wheel time.


Level 4 – High Automation
K.I.T.T.? Is that you? Not just yet. Level 4 autonomous vehicles can drive themselves without human interaction (besides entering your destination) but are restricted to known use cases. We’re not too far from seeing driverless vehicles out on public roads. Though regulations restrict their availability, Waymo has developed and is testing Level 4 vehicles capable of driving themselves in most environments and road conditions. If there were no regulatory or legal obstacles, you’d likely see more Level 4 vehicles on the road today! This is pretty much what Tesla has been offering to premium-model drivers, though regulators have not fully cleared it.


Level 5 – Full Automation Super Pursuit Mode!

At Level 5 autonomy, we arrive at true driverless cars. Level 5 capable vehicles should be able to monitor and maneuver through all road conditions and require no human interventions whatsoever, eliminating the need for a steering wheel and pedals. Combine a Level 5 autonomous car with a capable voice assistant and you’ll have your very own K.I.T.T.! Although many of the technological components exist for an artificially intelligent car today, due to regulations and legal battles, Level 5 vehicles are probably still many years away. Until then, we’ll have to settle for partial automation.


So let’s take this to our world, financial trading.


Level 0 – No Automation
This describes your typical retail investor. 100% manual approach to trade decisions and executions.


Level 1 – Investor Assistance
Here, human traders take advantage of software and data analytics to inform their manual decision-making and trading. This is likely your typical technical trader who, for instance, trades floor and ceiling breakthroughs via a software-based setup. This is more prevalent in some institutional investment firms.


Level 2 – Partial Automation

Humans take advantage of automated alerts and notification systems to drive faster, more comprehensive trade decision-making.


Level 3 – Conditional Automation

Trading is automated, but only under ideal conditions and with limitations. Here is where we typically find the traditional concept of quantitative traders. Investment decision rules are created by humans via a so-called rules-based approach. As such, a human investment manager is required to provide constant supervision and to step in with manual decisions or adjustments if market conditions fall below ideal. A key element of this level is that the human-created, rules-based automation has no, or extremely limited, conditional self-learning. As such, the scope of automation is limited to narrow, historically observed market conditions.


Level 4 – High Automation

At this level, self-learning of market conditions and self-judgment by algorithms are introduced into trade decisions. Algorithms and models not only recognize familiar conditions from previous market states, but also begin to form awareness and concepts of modified or even new market states. As such, Level 4 algorithmic executions display an ability to find new trade signals, and the resulting trade decisions, on their own. These characteristics require significantly less human intervention, as the models are able to cover a significant range of ever-evolving market conditions. If developed properly, Level 4 algorithms can also weigh decisions under varying market conditions against the risk-reward profiles of their trades, ultimately creating an automated risk-control mechanism. This is a significant departure from Level 3: at this incremental level of algorithmic automation, trade positions can show a healthy, situational weighting of risk exposure rather than the all-or-nothing positions of Level 3. Just for clarity, Level 4 still requires human oversight, because a sustained, genuinely new market condition may require a course correction that algorithms cannot figure out on their own. A great example of this “blind spot” is a system’s inability to deal with rising bond yields after its historical playbook contained only declining yields.


Level 5- Full Automation

The ultimate vision! Turn on that magic switch, give the algorithms the objectives you’d like to accomplish, sit back with a cocktail, and let them run. Here, algorithms evolve into such a general state of awareness that they not only recognize changing market conditions but, most importantly, can navigate a brand-new market state they have never seen before. In such an environment, the algorithms were originally programmed by humans, but in a way where the general approach to signal detection and to risk and reward was established in an ever-learning, general-AI fashion, so that future intervention will not be necessary. Humans are solely supervisory in nature and only interfere during emergency situations, such as moment-based crises like a war.


The reality of the current state of algorithmic advancement, and specifically of how it is being applied, is that the vast majority of algo-traders fall into Level 3. There is obviously nothing wrong with systematic trading, but in my mind it is important for everybody to understand the balance of human versus algorithmic controls. Each level brings unique benefits and disadvantages that investors need to buy into when allocating their capital. We at AlphaTrAI classify ourselves at Level 4 of autonomy and are confident about the remaining synergies between humans and algorithms. For us, it is a matter of pursuing a Level 5 world, as our core belief is that non-emotional, highly scalable trading decisions made by algorithms have a stronger, more sustainable effect on investment returns.

By Andreas Roell


I have been lucky enough in my career to be part of new phases of innovation in long-established industries. At the start of my career, the Internet first emerged and the dot-com companies began appearing. Then, I founded one of the first digital marketing agencies at a time when advertisers put all of their money into TV or radio commercials. Later, I launched one of the first VOD platforms in the Middle East when the entire Arabic-speaking world was glued to TV screens. All of these experiences had several common themes, but the one I would like to highlight today is the fact that individuals (i.e. consumers, users, customers, etc.) find it very difficult to categorize a new offering in relation to what they are familiar with.


Fast forward to today and I am having the same experience in the asset management and hedge fund world. As you probably know, AlphaTrAI is an AI-based asset management firm. In our many conversations with investors, I have noticed a natural tendency and desire to label us somewhere within the categories that they are familiar with.  Of course, this is not the first time a firm similar to ours has arrived on the scene. It happened when technical traders started to arrive and also with the advent of quantitative funds.


As a result, a clear explanation of how we fit among the other categories of hedge funds has helped us make faster progress. So, I thought I would share with you how we categorize the various hedge fund offerings.



I recognize that this might be a simplified view; however, I have learned that a simplified approach to a more complex discussion makes a lot of sense. Since the space is constantly evolving, it would be great to get any ideas from you to further build this out. Who knows? Maybe the power of social media makes our collective end product a “standard” or commonly adopted communication tool.


By Katherine Paulson


As a marketer in the financial industry for over twenty years, I have experienced many challenges and limitations due to the highly regulated environment. Although I often envied my professional peers who had tangible “products” that they could see and touch, at the end of the day, I was happy to meet the challenge, namely because I wanted to simplify and bring transparency to the complex nature of financial products. I’ve always felt a deep responsibility to my audience because they were making real-life choices with their hard-earned savings that could impact their retirement and financial future.


Entering into the world of private investing and hedge funds adds another level of complexity and scrutiny for marketers.  In fact, prior to September 2013, hedge funds were not permitted to advertise at all. Click here to read more. Now, however, with the signing of the JOBS Act, the ban on general solicitation for certain funds has been lifted.  This change creates many new opportunities for hedge fund managers like AlphaTrAI to bring transparency and education to our clients.


When you factor in the trauma of the financial crisis in 2008 and the recent revelations about the impact of social media on retail investors, the financial industry has woken up to the importance of their brand and messaging.  We more clearly realized that building the public’s trust and reputation was paramount.  In an industry dominated by analysts, mathematicians, and regulators, this was not an easy realization.  Marketing often took the back seat in priority, but no longer is this the case. 


It is an exciting time because these new perspectives and rules have opened up opportunities for hedge fund managers to spread awareness of our industry. We can now promote using channels that many other products take as a given, including print and digital magazines, newspapers, TV, podcasts, websites, LinkedIn, Twitter, Facebook, and other social media platforms. Click here to read more about Rule 506(c).


This new freedom enables stronger communication with our potential investors. While we understand that hedge funds are an exclusive market, they are still an important part of the global financial ecosystem. This is why it is important that the messaging to our audience is clear, honest, and specific.


I believe we are on the verge of a breakthrough in hedge fund marketing.  Much like our products, the marketing of hedge funds is emerging.  At AlphaTrAI we are in a position to both disrupt the asset management industry with our AI-enabled fund as an emerging asset class and in our marketing methodology.  This moment offers us the opportunity and responsibility to connect with our potential clients as we never have before.


At AlphaTrAI, we strive to provide the most reliable, sustainable, and insightful algorithm-based financial products. I’d like to highlight the last part of our vision. Bringing insights and providing education, transparency, and awareness in an innovative way: these are the tenets that we work toward achieving. I remain optimistic and passionate about the future of hedge fund marketing, because so much progress has been made. As marketers, we will need to seize this opportunity as pioneers, influencing marketing methodologies going forward with an emphasis on education, awareness, and transparency.


By Andreas Roell


In part 1 of my post, I left you with a question.  What would be the lasting impact of these unprecedented recent stock market events we just experienced?  It is still uncertain if this is a blip or the start of a new market dynamic.  


I did predict a strong likelihood of regulatory scrutiny, which started last week when the new U.S. Treasury Secretary, Janet Yellen, called a meeting of key financial regulators to discuss market volatility driven by retail trading in GameStop and other stocks. I believe this is just the beginning of scrutiny from the regulators, and it will continue.


Back to my question regarding market dynamics, we have all witnessed the power and influence of social media and its ability to turn politics upside down, and we can now clearly see it can do the same for the financial markets. 


Social media sentiment and trend analysis is a technique that has been around for some time, but we will likely hear more about it as a shiny object for algorithmic / NLP traders. Rest assured, this trend is something that will be discussed heavily throughout this year and beyond.


What I am more interested in is monitoring how much of a fad this scenario is or not. A couple of key points here:


  • I still believe that retail investors will ultimately get hurt by this directly or indirectly.  The same “mainstreamers” that are taking down the big hedge funds have their 401(k) and retirement savings invested with them. 


  • The more volatile markets become (for me, volatility simply means more confusing and less systematic), the harder it will be for retail investors, which will translate into frustration and heartburn.


Instead of heralding a new wave of investor populism, the rise and fall of GameStop’s stock may end up reinforcing what the financial industry has known for a very long time.   Namely, the fact that they have infinitely more access to tools, data, and resources than the average retail investor.   Click here to read more about who benefited during the Reddit trading frenzy.


At AlphaTrAI, we do not rely solely on historical data, because doing so is fragile to these specific and unpredictable events. Our algorithms monitor market events daily, and we remain steadfast in our approach of maintaining a high appreciation for tail risk events.

By Andreas Roell


In light of the recent, unprecedented circumstances surrounding the U.S. equity markets, I wanted to provide you with my perspective and insights on how our algorithmic approach differentiated us.


Before I go into the specifics of our algorithmic models’ autonomous decision making, I encourage you to read this great explanation and analysis in The New York Times click here.


This retail investor mobilization has exposed some phenomenal, sometimes hidden dynamics of our financial systems. First of all, retail investors’ share of stock market activity has grown (with the help of the stay-at-home mentality of the pandemic) to a record 25%. This is up from only 10% in 2019. Contributing factors include the wave of no-fee, direct trading platforms, such as Robinhood or Fidelity, and the free-flowing group dynamics of social media. I learned an interesting tidbit in a conversation this past week with a long-standing member of the now-famous Reddit thread r/wallstreetbets, who told me that the GameStop cry for long positions started back in 2020 without any aim of mobilizing a “let’s go after the evil hedge fund shorters” movement. It was simply a single post calling out the detection of a high short position. It reminded me of my early days in social media marketing, when I frequently referred to the video from the 2009 Sasquatch music festival (see here) that shows the power of group dynamics.


The convergence of these multiple types of dynamics is truly unique to what just happened last week. Since there has never been a period in stock market history like this, one is faced with the challenge where the previous playbook that typically analyzed historical data for decision making cannot be applied to find the best decision process. 


At AlphaTrAI, our algorithms immediately classify such an occurrence as what is known in the scientific world as a “non-stationary time-series” event. It simply means that the stream of event data that occurred through the course of this “retail revolt” has never been seen before in stock market history. So all traders, including quantitative, technical, algorithmic, and even human traders, who rely solely on historical events to drive their present investment decisions were at an absolute loss this week. We saw this with Melvin Capital, which manages a human-based short-trading hedge fund with a phenomenal track record. In my summarized perspective, they were completely caught flat-footed and did not have a hedging or counter game plan when the retail long traders started building momentum. Such reliance on a single “signal” strategy is something that we frequently talk about with the investor community. A sustainable, long-term viable investment strategy must not rely on a limited signal strategy and must be agile enough to adapt to new “game” situations.


These events highlight that the majority of funds are based on portfolios designed to deal with the frequently observed, small downturns in the market. This approach is necessarily fragile to the large, rare risks (so-called tail events) that can occur over small to large time horizons. Here is a brief description. These are the events that matter most when maintaining sustainable performance in the financial markets. A high appreciation for these tail risk events is a foundational feature of our models. We use Machine Learning to leverage information as effectively as possible, and we use this information to construct portfolios that we believe are designed to achieve gains while managing tail events that can be catastrophic for other portfolios. If you have not had the chance, I encourage you to read our white paper here, which discusses the depth of the negative impact such events can have on a portfolio and AlphaTrAI’s view on portfolio construction.


How do we strive to avoid catastrophic losses?

  • Ensembles of models that diversify not just our exposure to asset classes but also diversify our exposure to behaviors/models/strategies.
  • Dynamic models that assess risk and reward balance daily and automatically adjust exposure to the market.
  • Evaluating and selecting strategies based on probabilistic models of their extreme (tail) risk characteristics.



The lasting impact of these recent stock market events is uncertain. Nobody knows if they are a blip or the start of a new market dynamic. However, I strongly believe that we have not seen the last in terms of regulatory scrutiny. A future change in trading is likely, whether it is limitations to short trades, reduction of full market access to retail investors, or others. What I can tell you now is that this “unprecedented” or “non-stationary” event will not be the last we will have in our lifetime. 


The stock market is dynamic, and it is riskier than most investors think. It requires the ability to manage risk as much as possible while making decisions for a brand-new scenario as quickly as possible. Here at AlphaTrAI, we continue to see how successful models can navigate this incredibly difficult situation, all in an autonomous fashion. I feel fortunate to have launched our fund at a time that has given us the chance to prove how the models perform during some of the most difficult times in stock market history. Last week (and maybe this coming week, and beyond) was just another proof point that we are proud of.

By Andreas Roell


Last week, I had the pleasure of speaking on a virtual panel to an audience with a mix of quantitative and traditional asset managers and advisors. The conversation very quickly shifted to the topic of how to make allocators and investors comfortable with algorithmic investing versus what they are currently doing. 


As our discussion progressed, it occurred to me that there is a good deal of work to be done to help bring allocators and investors along the educational journey to give them a confident foundation of how algorithms work and what questions to ask.


However, what I became more curious about is the level of questioning, or diligence, that algorithmic/quantitative asset managers go through in comparison to traditional asset managers. Obviously, it is our responsibility to build credibility, and time will help drive comfort among investors. At the same time, however, I feel there is an opportunity for algorithmic asset managers like ourselves to take a look in the mirror when exposed to perhaps imbalanced scrutiny.


Here is an example of what I am referring to. In a recent conversation with a fellow asset manager, I was asked questions specifically around our algorithms, such as: How do they work? What are the specific data sets that are used? What market triggers drive decisions? What are the specific signals that make our models work? Among others.


I tried to answer them as thoroughly as I could without giving away trade secrets and felt pretty satisfied with my answers, but one never knows, as some of the questions cannot be simply answered without getting into details.


Since he was a fellow manager of a hedge fund (over $200mm in AUM) who focused on “human based” investment strategies, in this case a sector specific pair trading approach, I started to turn the tables on him and began asking him counter questions.


Questions like:  What are some examples of pairs that were successful for him last year? How does he typically identify a pair? How long does a pair signal last for him?  


A couple of his responses were surprising. One was, “You are asking me more detailed and technical questions than my investors. Neither seed nor institutional investors have ever asked me these questions in my 30-year investment career.” The second was, “I am not able to tell you all the details, as I would be giving you my recipe.” Funny, I thought, since these are questions I face in every conversation I have with investors.


This contrarian perspective was validated a week later when I talked to the head of all products at one of the most well-known financial institutions in the world. He, too, appeared to start “geeking out” on us with curiosity the longer our conversation went. Here again, once he was done firing question bullets at me, I simply asked him whether these were the types of questions his clients ask when choosing from his firm’s menu of investment products. The clear answer was a laughing no.


I started thinking about why this is the case. Why is there so much more interest, diligence or simply curiosity in the detailed mechanics of investment decisions when it comes to algorithmic trading solutions than we have with traditional fund offerings?


It reminded me of my digital marketing days, when I started my first agency in the late 90s and helped a large portion of the leading hotel companies use the early days of the internet to generate direct reservations. There, I had to compete for media budgets with the traditional formats, such as television ads. During that time, I just could not understand why every dollar I spent on an ad was held accountable to revenue generation while TV had no clue at all about its impact. Fast forward to now, and this perspective has changed in both directions. TV has lost a significant amount of its media budget share, while also being tracked as much as possible. On the other side, digital advertising has also become a medium not only for immediate revenue generation (what is called Direct Response) but also for awareness and branding purposes.


So, the moral of the story here is that over time in advertising, marketers learned from both sides and applied layers from one to the other. This is something that I predict will happen in the asset management industry as well over time.


“Traditionalists” will become accustomed to higher scrutiny and detailed questions about their investment decision process, while the acceptance of algorithmic-at-the-core traders will increase.  The level of understanding and comfort around proprietary decision making processes will increase. And as such, the algorithmic side of the coin will receive less scrutiny.  This is at least my prediction. 


Here at AlphaTrAI, where we are obviously an algorithmic asset manager, we believe that we have an obligation to increase the level of understanding and thus comfort that investors have at this point in time with algorithmic investing. 


We aim to work hard, first, at bringing more knowledge of this new approach into the investment community, and at the same time we know that we need to be as transparent, and as little secretive, as possible. We believe this is the obligation of all algorithmic investors. The more we do it, the better our collective reputation will be, and the more willing investors will be to shift to this new form of asset management. Just like what happened during my early days in digital advertising.


Happy AI investing.

By Karyn Williams


Most of us take for granted that asset managers deliver services to investors through investment products. Whether actively managed to pursue alpha or passively managed to track broad indexes, asset managers sell these products, and investors (and their advisors) buy them for their portfolios. There are over 10,000 such products around the globe competing for a place in your investment portfolio. With advances in artificial intelligence and data science, this is about to change radically.


Advances in computing power and the availability of real-time data on virtually every aspect of the global economy have changed the investment game. No matter how astute any human fund manager is, no person or team is as capable of processing, prioritizing, and analyzing all of this data as a set of well-calibrated algorithms. 


This state of play puts immense pressure on active asset managers. For those whose processes have not changed appreciably over the past ten years, alpha is increasingly hard to produce. They will have to innovate or else compete with large fund complexes that offer nearly zero-cost passive products.


Adding to this pressure are low yields. An investor who targets a 5% return to support future goals, wants to keep pace with 2% inflation, and must cover investment fees of 1.5% has to earn over 8% annually. But with 10-year Treasury yields below 1% as of this writing, investors have difficult portfolio choices to make: lower the overall cost of investing (by moving from active to passive products), increase the risk of the portfolio, change the investment objectives, or all of the above.
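The return-gap arithmetic here can be sketched with simple additive terms (compounding the three rates would push the required return slightly higher, to roughly 8.7%):

```python
# Back-of-the-envelope required return: target + inflation + fees.
# The 5% / 2% / 1.5% figures come from the example in the text.
target_return = 0.05   # return needed to support future goals
inflation     = 0.02   # rate the investor wants to keep pace with
fees          = 0.015  # investment fees to cover

required = target_return + inflation + fees
print(f"required annual return: {required:.1%}")  # 8.5%, i.e. "over 8%"
```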


A traditional investment product that fits within a single asset class will not fill the return gap. Such products usually comprise between 1% and 5% of a portfolio, so even with expected outperformance they have a small overall impact. A multi-asset product also may not help. Off-the-shelf and one-size-fits-all, these are designed for an average investor and do not necessarily fit with the rest of the portfolio. Private market and quant products may not help either, presuming the investor can get access to the best ones. Not only are fees high, but the potential for large portfolio losses can actually lower long-term portfolio returns.


Rather than traditional investment products, investors need solutions, and this is precisely what AI and data science can provide.


AI techniques can improve how we measure and manage market risks compared to traditional approaches. Applied to the investor’s challenge, an AI-based solution that maintains a portfolio’s exposure to equity markets while moderating large declines in value can improve investor outcomes. AI can also better systematize data, interpret investor profiles, and control certain decision biases. An AI-based solution that is designed specifically for an investor’s unique risk capacity and can control biases can further improve investor outcomes.


To sum up, the next generation of AI-enabled investment solutions not only measures and manages portfolio risks better; these solutions are also customized to serve each investor’s specific needs. At scale, AI-based solutions will replace the traditional products we see in the market today.


Arguably, one of the roadblocks to the adoption of AI across the investment management industry is attitudinal: the fear of AI either displacing advisors or making investment decisions that investors do not understand and cannot monitor.


Approached correctly, with understandable algorithms that are aligned with client objectives and free of conflicts of interest, AI can help investment professionals to serve their clients better. AI frees them to focus on what their clients care about – achieving investment objectives. It replaces complex conversations about the latest products with overall value creation. By creating meaningful value, advisors become even more essential and clients feel confident.


AI and data science can drive a profound and positive transformation for investors. Better risk measurement and management, customization, and a holistic approach will upend products in favor of solutions, driving considerably better outcomes and empowering investors to invest more confidently.

By Katherine Paulson


This week’s blog is an introduction to some of the basic terminology used in Artificial Intelligence (AI). As someone new to AI but a veteran of the financial services industry, I made an effort to learn the basics quickly when I arrived here at AlphaTrAI. Once I understood these terms, a new world of asset management, and the exciting possibilities that come with it, opened up to me.


Although it seems like AI terminology just recently started appearing online and in social media, it is not new; it has been around for decades. In fact, the term “Artificial Intelligence” was coined in 1956 at Dartmouth College. AI’s incredible growth over the last decade is due to these key developments:


  • Computational Power
  • Access to Algorithms and Libraries
  • New Research
  • Access to Data


The growth in AI investing started in 2011 and continued its upward trajectory, with a big acceleration from 2015 through mid-2019, when VCs invested $60B in AI-related startups.

The Basics


When thinking about AI terminology, let’s begin at the top with Artificial Intelligence, Machine Learning (ML), and Deep Learning (DL). Artificial Intelligence encompasses the entire scope, Machine Learning is a subset of AI, and Deep Learning is a subset of Machine Learning, as illustrated here:

In the simplest terms:


Artificial Intelligence = Computer is doing something human

Machine Learning = This computer is learning

Deep Learning = This computer is learning in a specific way


Two types of Machine Learning


Now, let’s move on to Supervised Learning vs. Unsupervised Learning. The main thing to remember about Supervised Learning is that it is all about labeled data. In Supervised Learning, imagine a manager who oversees all of the data and tells the computer exactly what that data is. The computer is fed examples, directed to determine what is correct or incorrect, and told their differences, with the end goal of training it to identify features. Features are the attributes you know to be important for making a prediction. Then, when data it has never seen before is introduced, the computer uses the model it has created (think of a model as a shortcut to a pattern the computer saw) to output a prediction of what that new data is. An example output may be expressed as “65% confident”.
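As a toy illustration of the supervised idea, here is a minimal Python sketch: the “manager” supplies labeled examples, the model is just the average (centroid) of each label’s feature values, and a prediction on unseen data comes with a confidence score. The data, the single feature, and the nearest-centroid rule are all invented for illustration, not a production method.

```python
# Toy supervised learning: learn from labeled examples, then predict
# a label (with a confidence) for a value the model has never seen.
# All numbers and labels here are made up for illustration.

def train(examples):
    """Model = the mean (centroid) of the feature values for each label."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, value):
    """Pick the closest centroid; confidence grows as distance shrinks."""
    dists = {label: abs(value - c) for label, c in model.items()}
    best = min(dists, key=dists.get)
    confidence = 1.0 - dists[best] / sum(dists.values())
    return best, confidence

labeled = [(1.0, "cat"), (1.2, "cat"), (4.8, "dog"), (5.1, "dog")]
model = train(labeled)
label, conf = predict(model, 1.5)   # unseen data point
print(label, round(conf, 2))
```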


In Unsupervised Learning, you guessed it, it is the opposite: there are no labels, no manager, and no direction. The computer is given a data set and organizes the data on its own. For example, a popular technique in Unsupervised Learning is “clustering”: the task of dividing data points into groups such that points in the same group are more similar to each other than to points in other groups. Clustering algorithms are popular for customer segmentation.
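To make the clustering idea concrete, here is a naive one-dimensional k-means sketch in plain Python; no labels are provided, and the algorithm groups the (made-up) points by proximity alone. A real application would use a library implementation instead.

```python
# Naive 1-D k-means: repeatedly assign points to the nearest center,
# then move each center to the mean of its assigned points.

def kmeans_1d(points, k, iters=10):
    centers = sorted(set(points))[:k]      # naive initialization
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: abs(p - centers[j]))
            groups[nearest].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

data = [1.0, 1.1, 0.9, 10.0, 10.2, 9.8]    # two obvious "segments"
clusters = kmeans_1d(data, k=2)
print(clusters)
```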


Going back to the basic definitions of AI, ML, and DL, remember that AI can be described simply as a computer doing anything human-like, ranging from autonomous cars to a very sophisticated set of if-then statements that recreate human behavior. Within AI is ML, which is most of what we read and hear about. So, to differentiate ML from DL, we need to pin down what makes DL distinct.


What is Deep Learning?


Deep Learning is defined by its structure, the neural network, which loosely mimics the human brain. Neural networks are set up to model how humans process information and are generally more computationally intensive than traditional ML. An Artificial Neural Network (ANN) can be as simple as one input layer, one hidden layer, and one output layer. Deep neural networks, however, have more than one hidden layer; that is what separates them. Each hidden layer computes a specialized function. These multiple layers, or depth, allow much more abstract features to be extracted in the upper layers, which improves the network’s ability to classify something, an image for example.

To describe this in a simple way, let’s use an analogy.  A person with very poor vision with a prescription of -15 looks at an image and sees a complete blur.

Then, someone hands them a pair of glasses that improves their vision to -7.  Now, they start to see some clusters of pixels in the image.

Next, someone hands them a pair of glasses that improves their vision to -3 and they can determine that there are ears, body and paws in the image.

Finally, they are handed another pair of glasses that allow them to have 20/20 vision and now they see clearly the clusters form an image of their favorite pet.

Deep Learning = Multiple hidden layers that “build” on each other to “extract” higher level features like “paws”.
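In code, the analogy corresponds to stacking layers: each hidden layer transforms the previous layer’s output into a more abstract representation. Here is a bare-bones forward pass through two hidden layers, with hand-picked weights that are purely illustrative (a real network learns its weights from data):

```python
# A minimal deep-network forward pass: input -> hidden 1 -> hidden 2 -> output.
# Weights are hand-picked for illustration; no training is involved.

def relu(v):
    """Non-linearity applied after each hidden layer."""
    return [max(0.0, x) for x in v]

def dense(v, weights, biases):
    """Fully connected layer: out_j = sum_i weights[j][i] * v[i] + biases[j]."""
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, biases)]

x  = [0.5, -1.0]                                           # input features
h1 = relu(dense(x,  [[1, 0], [0, 1], [1, 1]], [0, 0, 0]))  # hidden layer 1
h2 = relu(dense(h1, [[1, -1, 2], [0, 1, 1]],  [0, 0]))     # hidden layer 2
out = dense(h2, [[1, 1]], [0])                             # output layer
print(h1, h2, out)
```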


What is incredible about DL is the ability to apply this layered learning to the abstract. The business applications are boundless.


I hope this brief overview gives you a better understanding of AI’s basic terminology and the next time it comes up in conversation, you won’t miss a beat.