Animating Stock Markets
We use AI to generate a future image of a stock’s market data from its historical image
Compare how realistic the AI-generated image (right figure) is against the true image (left figure):
- The figure shows daily market data of a stock
- White line – close; grey line – 20-day moving average; upper bars – high/low; lower bars – volume
- Left chart – historical data
- Right chart – image generated by AI:
- context data (black background) – the input used to create the prediction
- predicted data (dark grey background) – the prediction
Joint work with Kuntara Pukthuanthong (UM)
Presentations:
- Finalist of 2024 Dr. Richard A. Crowell Memorial Prize paper competition (the competition is still ongoing)
- Future of Financial Information Conference, Stockholm, 2024
- Midwest Finance Association, Chicago, 2024
- Award for best presentation at KKF2023 by PSFiB
Abstract: Our study presents a revolutionary method called Variational Recurrent Neural Networks (VRNNs) that utilizes a series of graphs to predict future stock price trends. It works like an animated movie about price trends. We analyze data from the S&P500 index constituents, which are known to be less predictable than other traded stocks, between 1993 and 2021. Our model generates a Sharpe ratio above 1. Furthermore, our prediction of price changes, after adjusting for various price trend strategies and firm traits, strongly forecasts the weekly returns of firms.
Figure: The figure illustrates a single Variational Recurrent Neural Network (VRNN) cell that we use to generate stock images. In a VRNN, each cell contains a Variational Autoencoder (VAE) with Convolutional Neural Network (CNN) layers, which condenses high-dimensional data into a low-dimensional latent vector. The prior hidden state (ht−1), current latent state (zt), and new input frame (xt) are processed through a recursive function to yield the current hidden state (ht).
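The cell recurrence described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the CNN encoder is replaced by a single linear layer, and all weights and dimensions are hypothetical stand-ins. It shows only the core VRNN step — encode the frame xt together with ht−1 into a latent zt via the reparameterization trick, then update the hidden state from (xt, zt, ht−1).

```python
import numpy as np

rng = np.random.default_rng(0)

def vrnn_cell_step(h_prev, x_t, params):
    """One VRNN cell step (minimal sketch, linear layers in place of CNNs)."""
    W_enc, W_h = params
    # Encoder: map [x_t; h_prev] to the mean and log-variance of the latent z_t
    stats = np.tanh(np.concatenate([x_t, h_prev]) @ W_enc)
    mu, logvar = np.split(stats, 2)
    # Reparameterization trick: z_t = mu + sigma * eps, eps ~ N(0, I)
    z_t = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
    # Recursive function: h_t depends on x_t, z_t, and h_{t-1}
    h_t = np.tanh(np.concatenate([x_t, z_t, h_prev]) @ W_h)
    return h_t, z_t

# Toy dimensions (hypothetical, not from the paper)
x_dim, z_dim, h_dim = 8, 4, 6
params = (rng.standard_normal((x_dim + h_dim, 2 * z_dim)) * 0.1,
          rng.standard_normal((x_dim + z_dim + h_dim, h_dim)) * 0.1)

h = np.zeros(h_dim)
frames = rng.standard_normal((5, x_dim))  # a toy sequence of frame features
for x in frames:
    h, z = vrnn_cell_step(h, x, params)
print(h.shape, z.shape)  # (6,) (4,)
```

Unrolling this cell over a sequence of chart frames is what lets the model generate the next "frames" of the price animation from the context frames.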
Just Look: Knowing Peers with Image Representation
We use AI to define a new measure of company similarity – common objects in images
This figure presents photos related to Johnson & Johnson and its four peers with the highest similarity scores – from the left:
Argan Inc, WD-40 Co, Lifeway Foods Inc, and FMC Corp. Each set of photos is titled with the peer’s name and its similarity score in brackets. Each pair of photos illustrates the cosine similarity measure.
Joint work with Kuntara Pukthuanthong (UM)
Presentations:
- Finalist of 2024 Dr. Richard A. Crowell Memorial Prize paper competition (the competition is still ongoing)
- Northern Finance Association, Toronto, Canada, 2023
- 2023 Hong Kong Conference for Fintech, AI, and Big Data in Business
- Workshop on Machine Learning for Investor Modelling, The Fields Institute for Research in Mathematical Sciences, Toronto, Canada, 2023
- BlackRock, 2023
- Trulaske College of Business / University of Missouri, Brown Bag Seminar, May 12, 2023
- Poznań University of Economics and Business, Seminar, March 24, 2023
- Missouri State University, Seminar, March 3, 2023
- University of Missouri-St. Louis, First CoBA Research Seminar of 2023, February 17 2023
Abstract: What does an industry look like? We present a novel approach to assess firm similarity by analyzing
four million visuals. Leveraging machine learning, we identify images representing companies’ operations,
forming Image Firm Similarities (IFS). IFS mirrors investor-defined peer groups and performs
competitively against SIC, GICS, NAICS, and text-based similarity, akin to the brain’s visual processing
superiority. This outperformance appears in pair trading, diversification, and industry momentum strategies.
The effectiveness of IFS is attributed to dynamic reclassification and high investor agreement within
industries, leading to significant demand and supply effects on stock prices. IFS excels in industries with
growth and intangibility.
Figure: The figure presents the architecture of the image comparison. In the first step, images are standardized to dimensions
224x224x3, where the first dimension is the height, the second the width, and the third the color channels. Second, to detect
objects in photos, we use VGG-19, a convolutional neural network that is 19 layers deep (Simonyan & Zisserman, 2014). Third, we apply Principal Component Analysis (PCA) to reduce the feature dimension while retaining at least 70% of the variation. The reduced
vector has a dimension of 1x1x218. Finally, the feature vectors are the input used to compute the cosine similarity between two photos.
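The four-step pipeline in the caption can be sketched in NumPy. This is a toy illustration, not the paper's code: the VGG-19 feature extractor is replaced by a random projection stand-in (loading the real network would require a deep-learning framework), the images are small random arrays, and all sizes are hypothetical. The PCA-to-70%-variance step and the cosine similarity are implemented as described.

```python
import numpy as np

rng = np.random.default_rng(42)

def extract_features(images, W):
    """Stand-in for VGG-19: flatten, rescale, and apply a random projection with a ReLU."""
    flat = images.reshape(images.shape[0], -1) / 255.0
    return np.maximum(flat @ W, 0.0)

def pca_reduce(X, var_target=0.70):
    """PCA via SVD, keeping the fewest components that retain >= var_target of the variance."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S ** 2) / np.sum(S ** 2)
    k = int(np.searchsorted(np.cumsum(explained), var_target)) + 1
    return Xc @ Vt[:k].T

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Step 1: standardized images (downscaled to 32x32x3 here to keep the toy example small)
n = 6
images = rng.integers(0, 256, size=(n, 32, 32, 3)).astype(float)
# Step 2: object features (random-projection stand-in for VGG-19)
W = rng.standard_normal((32 * 32 * 3, 64)) * 0.05
raw_feats = extract_features(images, W)
# Step 3: PCA reduction retaining at least 70% of the variation
feats = pca_reduce(raw_feats)
# Step 4: cosine similarity between two photos' feature vectors
sim = cosine_similarity(feats[0], feats[1])
print(feats.shape, sim)
```

With the real pipeline, `feats` would have 218 columns per the caption; here the toy sample size caps the number of retained components.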