Applications of Image-Driven Machine Learning

MPhil Thesis Defence


Title: "Applications of Image-Driven Machine Learning"

By

Miss Man Hing WONG


Abstract

Image-driven machine learning both simplifies feature engineering and
delivers prediction accuracy that surpasses template-oriented
approaches, and it therefore holds great potential across many facets
of industrial application. This thesis proposes two novel applications
of image-driven machine learning, in the areas of finance and
human-computer interaction.

The first work of the thesis focuses on a financial application. Stock
forecasting with candlestick patterns relies heavily on template-oriented,
rule-based heuristics, which require laborious sample labeling and
profound financial expertise. These methods are retrospective and fail to
capture premature or partial signals in candlesticks, and such rigidity
limits the application of candlesticks primarily to classification tasks.
We therefore propose a novel, end-to-end deep learning model, GANStick, to
address these issues. GANStick is a conditional DCGAN and convolutional
BiLSTM-based model that generates future predictive candlesticks to
augment multistep time-series forecasting with regression. GANStick has
been empirically shown to significantly outperform multiple baseline
implementations, with an average error rate 68% lower across all five
timesteps on a dataset composed of 11 large-cap US stocks. GANStick is the
first work to automate the workflow from candlestick pattern recognition
and generation to the quantification of future price volatility, using a
novel generative candlestick approach built on generative adversarial
networks.
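
As a rough, minimal sketch of this architecture (in PyTorch; the layer
sizes, the 32x32 image resolution, and all class names below are
illustrative assumptions, not the thesis implementation): a conditional
generator synthesizes candlestick images from noise plus a price-history
condition, and a convolutional BiLSTM regresses five future timesteps
from the resulting image sequence.

    import torch
    import torch.nn as nn

    class CondGenerator(nn.Module):
        """Conditional DCGAN generator: (noise, price condition) -> 32x32 image."""
        def __init__(self, z_dim=100, cond_dim=32):
            super().__init__()
            self.net = nn.Sequential(
                # upsample 1x1 -> 4x4 -> 8x8 -> 16x16 -> 32x32
                nn.ConvTranspose2d(z_dim + cond_dim, 256, 4, 1, 0),
                nn.BatchNorm2d(256), nn.ReLU(True),
                nn.ConvTranspose2d(256, 128, 4, 2, 1),
                nn.BatchNorm2d(128), nn.ReLU(True),
                nn.ConvTranspose2d(128, 64, 4, 2, 1),
                nn.BatchNorm2d(64), nn.ReLU(True),
                nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
            )

        def forward(self, z, cond):
            x = torch.cat([z, cond], dim=1)[:, :, None, None]  # (B, z+c, 1, 1)
            return self.net(x)

    class ConvBiLSTMRegressor(nn.Module):
        """CNN encodes each candlestick image; a BiLSTM regresses future prices."""
        def __init__(self, feat_dim=128, steps=5):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(True),
                nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(True),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim),
            )
            self.lstm = nn.LSTM(feat_dim, 64, batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * 64, steps)  # one output per future timestep

        def forward(self, imgs):                  # imgs: (B, T, 3, 32, 32)
            b, t = imgs.shape[:2]
            feats = self.cnn(imgs.flatten(0, 1)).view(b, t, -1)
            out, _ = self.lstm(feats)
            return self.head(out[:, -1])          # multistep price forecast

In this sketch, generated candlestick images would simply be appended to
the observed sequence before it is passed to the regressor.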

The second work of the thesis focuses on applications in human-computer
interaction. One-handed interaction on smartphone interfaces offers the
prominent benefit of highly mobile input, so the design factor of user
reachability is essential to realizing that benefit. However, physical
characteristics alone, such as hand size, do not fully reflect users'
cognitive choices of hand poses and the corresponding inertia. In this
work, we first conduct a six-week crowdsourcing study and collect 62,156
responses reflecting users' cognitive preferences over 3,000 clustered
UIs. Our analysis of the responses shows that user perceptions of button
layouts diverge from physical characteristics. Accordingly, we propose
machine learning models to predict a user's choice of hand pose and the
likelihood of switching hand poses across UI sequences. Through an
illustrative example, we show that our models can serve as an auditing
tool for assessing user reachability in one-handed interaction on
smartphone interfaces.
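
As a rough illustration only, the sketch below (PyTorch; the three-pose
label set, the network, and the switch heuristic are all assumptions, not
the thesis models) classifies a rendered UI screenshot into hand poses and
compares predictions on consecutive UIs to estimate a pose-switch
likelihood.

    import torch
    import torch.nn as nn

    POSES = ["left_thumb", "right_thumb", "cradled"]  # hypothetical label set

    class PoseChoiceNet(nn.Module):
        """Small CNN: rendered UI screenshot -> logits over hand poses."""
        def __init__(self, n_poses=len(POSES)):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(True),
                nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU(True),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(32, n_poses)

        def forward(self, screenshot):            # (B, 3, H, W)
            return self.head(self.backbone(screenshot))

    def switch_likelihood(model, ui_a, ui_b):
        """Heuristic switch score: probability mass that moves between poses."""
        pa = torch.softmax(model(ui_a), dim=1)
        pb = torch.softmax(model(ui_b), dim=1)
        return 0.5 * (pa - pb).abs().sum(dim=1)   # total variation, in [0, 1]

An auditing tool along these lines could flag consecutive screens whose
switch score exceeds a chosen threshold for designer review.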

The third work of the thesis substantiates the second: we apply
layer-wise relevance propagation (LRP) to explain the decisions of the
models proposed there. Single-handed smartphone interaction has primarily
been addressed through either ergonomic designs or interaction gadgets,
while the more fundamental approach of auditing the smartphone interface
(UI) itself has been neglected. This work proposes machine learning models
for predicting single-handed posture choices and posture changes during
users' interactions with both individual and sequential UIs, achieving an
average accuracy of 72.15%. Our explainable model suggests that pose
choices and changes are reflected in the LRP relevance of button layouts
and in the button density of UIs. These explainable features enable
designers to reduce design burdens.
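
For concreteness, here is a minimal NumPy sketch of the LRP epsilon-rule
on which such explanations rest; the toy two-layer network and all sizes
are illustrative stand-ins, not the actual model.

    import numpy as np

    def lrp_dense(a, w, b, r_out, eps=1e-6):
        """Epsilon-rule: redistribute output relevance r_out onto inputs a
        through a dense layer with weights w (in_dim x out_dim) and bias b."""
        z = a @ w + b                                   # forward pre-activations
        s = r_out / (z + eps * np.where(z >= 0, 1.0, -1.0))
        return a * (w @ s)                              # relevance per input unit

    # Toy two-layer network standing in for the pose model.
    rng = np.random.default_rng(0)
    a1 = rng.random(64)                    # hypothetical flattened UI features
    w1, b1 = rng.standard_normal((64, 16)), np.zeros(16)
    w2, b2 = rng.standard_normal((16, 2)), np.zeros(2)
    h = np.maximum(a1 @ w1 + b1, 0.0)      # hidden ReLU activations
    r_out = np.eye(2)[1] * (h @ w2 + b2)   # relevance starts at the target logit
    r_hidden = lrp_dense(h, w2, b2, r_out)
    r_input = lrp_dense(a1, w1, b1, r_hidden)  # per-feature relevance scores

Summing r_input over the features belonging to each button region would
yield per-button relevance of the kind described above.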


Date:  			Tuesday, 16 February 2021

Time:			10:00 am - 12:00 noon

Zoom meeting: 
https://hkust.zoom.us/j/96150325377?pwd=bWs5WFRWMmYwYXN4bHBVVlpkOFZOZz09

Committee Members:	Dr. Pan Hui (Supervisor)
 			Dr. Wilfred Ng (Chairperson)
 			Dr. Dimitrios CHATZOPOULOS


**** ALL are Welcome ****