Research and Technology Forum 2023


The Research and Technology Forum 2023 of the Department of Computer Science and Engineering aims to give industry partners a better understanding of prospective research areas of the department and of the collaboration channels provided by HKUST. Throughout the event, ongoing research projects conducted by our faculty will be showcased, and participants may also learn more about the department.

Event Details

Date: 31 March 2023 (Friday)
Time: 10:00 am - 1:00 pm HKT (Registration starts at 9:40 am)
Venue: Lam Woo Lecture Theater (LT-B), The Hong Kong University of Science and Technology (location map)
Video: How to get to Lam Woo Lecture Theater (LT-B) from the North Gate of HKUST
Registration: closed

Program Rundown

Time (HKT) Event Rundown
10:00 am - 10:15 am Opening Remarks and Introduction
By Prof. Xiaofang ZHOU, Head of Department of CSE
10:15 am - 10:30 am Presentation by the Office of Knowledge Transfer (OKT), HKUST
By Dr. David Leung, Head (Smart System and Project Development) of OKT
10:30 am - 10:45 am Overview of CSE Labs and Research Highlights
By Prof. Xiaofang ZHOU, Head of Department of CSE
10:45 am - 12:00 noon Research and Project Presentations by CSE Professors
12:00 noon - 1:00 pm Poster Exhibition (refreshments served)


  • All are welcome. Online registration is required.
  • Free admission.
  • Each registration admits one person only.


Titles and Abstracts
Research and Project Presentations by CSE Professors


Assistant Professor, Department of CSE

Saving Bitcoin and Ethereum Miners Millions of Dollars

Cryptocurrency miners, be it in proof-of-work or proof-of-stake, play an essential role in ensuring consensus and stability in the system. In exchange for their services, the miners are paid a transaction fee each time they add a transaction to the blockchain. Thus, it is natural from a miner's point of view to aim to maximize these fees. In this talk, we show how this problem can be solved using parameterized graph algorithms. Our algorithm is guaranteed to perform optimal mining and saves Bitcoin miners more than 1,000,000 USD per month.
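
At its core, fee maximization resembles a constrained selection problem: pick pending transactions whose total gas fits the block limit while maximizing the collected fees. The sketch below (all names and figures are hypothetical) illustrates only this simplified view with a greedy fee-per-gas heuristic; the talk's actual method uses parameterized graph algorithms and guarantees optimality, which a greedy pass does not.

```python
def select_transactions(pending, gas_limit):
    """pending: list of (tx_id, gas_cost, fee_in_gwei) tuples."""
    # Rank by fee density (fee per unit of gas), highest first.
    ranked = sorted(pending, key=lambda t: t[2] / t[1], reverse=True)
    chosen, gas_used, total_fee = [], 0, 0
    for tx_id, gas, fee in ranked:
        if gas_used + gas <= gas_limit:  # fits the remaining block space
            chosen.append(tx_id)
            gas_used += gas
            total_fee += fee
    return chosen, total_fee

# Three hypothetical pending transactions and a 120k gas budget.
txs = [("a", 50_000, 400), ("b", 21_000, 300), ("c", 100_000, 500)]
print(select_transactions(txs, 120_000))  # → (['b', 'a'], 700)
```

The greedy pass can miss the optimum when transaction dependencies force a particular inclusion order, which is precisely where the graph-algorithmic treatment comes in.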

Prof. Dan XU

Assistant Professor, Department of CSE

Joint 2D and 3D Visual Perception and Learning for Holistic Scene Understanding

In this talk, several ongoing research directions in our lab will be briefly presented, including scene depth estimation and reconstruction in supervised or self-supervised settings, joint multi-modal multi-task scene perception, open-world video understanding and generation, and end-to-end deep visual SLAM. I will introduce the basic setups of these problems, their important research value, and the significant results we have achieved along these directions.

Prof. Dit-Yan YEUNG

Chair Professor, Department of CSE

Spatiotemporal Trend Forecasting: Research and Applications

Many real-world applications require simultaneous modeling of the spatial and temporal dimensions of a system. For example, spatiotemporal modeling is necessary to capture the change of traffic patterns in a highway network or the development of epidemic transmission patterns in a country over a certain period of time. Beyond modeling the behavior of events that have already occurred, spatiotemporal trend forecasting goes a step further by making predictions. This talk aims to give a quick overview of our previous and current research in taking a machine learning approach to spatiotemporal trend forecasting, which includes developing novel machine learning models and applying them to challenging real-world applications.

Prof. Hao CHEN

Assistant Professor, Department of CSE

Trustworthy Artificial Intelligence for Healthcare

Artificial intelligence (AI), especially deep learning with large-scale annotated datasets, has dramatically advanced recognition performance in many domains, including speech recognition, visual recognition, and natural language processing. Despite these breakthroughs, its application to healthcare, such as medical imaging and analysis, remains to be further explored, as large-scale, high-quality, fully annotated datasets are not easily accessible there. In this talk, I will share our recent progress on developing trustworthy AI for healthcare, especially in medical imaging and analysis, including label-efficient deep learning methods that leverage an abundance of weakly-labelled and/or unlabelled data, domain generalization, and uncertainty quantification, with versatile applications to image enhancement, disease diagnosis, anomaly detection, lesion segmentation, hybrid human-machine collaboration, etc. Challenges and future directions will also be discussed.

Prof. Huamin QU

Chair Professor, Department of CSE

Situated Visualization: Bringing Augmented Reality and Virtual Reality to Real Life

Extended reality (XR), including augmented reality (AR) and virtual reality (VR), is the next mainstream computing platform. Although XR devices have become increasingly accessible in recent years, they have not been widely adopted in daily use. The XR team of HKUST VisLab is devoted to bringing XR into real life and enriching everyone's XR experience. Our research mission is to design and develop effective XR techniques and user-friendly applications to improve productivity and creativity.

We use human-centered design and data-driven methods to innovate XR technology. We have been developing XR applications to facilitate big data analysis for decision making, resulting in novel visualization techniques, interactions, and authoring tools. Our research has contributed to solving various critical real-world challenges, including social media marketing, urban planning, and education. We are cooperating with local communities as living labs to enrich their living experience using XR. Specifically, we are building the HKUST campuses into XR-enhanced campuses to showcase how people in the future can use XR to break the constraints of time and space to live, study, and play together.

Prof. James KWOK

Professor, Department of CSE

Real-world Applications of Artificial Intelligence and Machine Learning

This talk covers some industry projects related to the use of AI and machine learning. The application domains include FinTech, environment, machine translation, and networking.

Prof. Kai CHEN

Professor, Department of CSE

Towards High-performance Datacenter Networking and Systems

Datacenter has been the main infrastructure for various big data and AI applications. In this talk, I will briefly introduce our efforts in building high-performance datacenter networking and networked systems over the past 10 years, some of which have been deployed in the real-world.

Prof. Ke YI

Professor, Department of CSE

Secure Query Evaluation

In secure multi-party computation (MPC), multiple data owners can jointly compute a function without revealing their own data. Although this has been known to be theoretically possible for more than 40 years, practical MPC protocols that scale to megabytes of data have become available only recently. In this talk, I will briefly present our recent efforts in building a secure query processing engine that can evaluate SQL queries under the MPC model, as well as its potential applications.
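
The core primitive behind such engines can be illustrated with additive secret sharing, a generic textbook construction rather than the engine from the talk: each owner splits its value into random shares, the parties sum shares locally, and only the final total is ever reconstructed.

```python
# Minimal additive secret sharing over a prime field (illustration only).
import random

P = 2**61 - 1  # a Mersenne prime, used here as the field modulus

def share(value, n_parties):
    # All but one share are uniformly random; the last makes the sum work out.
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Three data owners secret-share their private values among three parties.
values = [10, 20, 12]
all_shares = [share(v, 3) for v in values]

# Party i sums the i-th share of every value; no party sees any raw input.
partial = [sum(s[i] for s in all_shares) % P for i in range(3)]
print(reconstruct(partial))  # → 42
```

Evaluating full SQL (joins, selections, aggregates) under this model is far harder than a sum, which is what makes a practical secure query engine a research problem.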

Prof. Lei CHEN

Chair Professor, Department of CSE

Data Management for Artificial Intelligence (DB4AI)

As deep learning models succeed in many areas, training and deploying them with higher efficiency and lower cost are urgently needed. The challenges come from the fact that deep learning models keep evolving and the data they use keeps growing. We aim to use database techniques to solve these challenges automatically, effectively, and efficiently. In this talk, I will discuss DB4AI (Database for Artificial Intelligence), which aims to develop powerful and efficient data management systems that can support the needs of cutting-edge AI applications.

Prof. Minhao CHENG

Assistant Professor, Department of CSE

Enhancing Machine Learning Security by Post-breach Protection

While machine learning models have proved their effectiveness, some, especially deep neural networks, are highly vulnerable to different attacks. Although a lot of defense methods have been proposed, they are routinely broken soon after release by more powerful attacks. In this talk, I will introduce how to strengthen machine learning security with post-breach protection (digital forensics, autofix, ...) as complementary to existing defenses.

Prof. Qifeng CHEN

Assistant Professor, Department of CSE

Generative AI for 3D Content

We have witnessed the great advancement of AI-generated content (AIGC), such as Stable Diffusion and ChatGPT. Users can use AIGC tools to generate photorealistic or artistic images from given conditions. On the other hand, generative AI for 3D scenes and avatars is still in its early research stage. In this talk, I will share some of my research on 3D synthesis for complex scenes and digital humans, using generative models of GANs and diffusion models. I will talk about some key designs that lead to substantial improvement in scene-level and object-level 3D synthesis as well as automatic 3D avatar generation and editing.

Prof. Shuai WANG

Assistant Professor, Department of CSE

Securing AI Software Systems

Artificial Intelligence (AI) has become a buzzword in recent years, and it is increasingly being used in various software systems, ranging from autonomous driving systems and financial services to healthcare devices. With the rapid advancement of AI technology, the security and reliability risks associated with AI software systems are also increasing. AI systems may be vulnerable to cyber-attacks, which can compromise the confidentiality, integrity, and availability of critical information. The failure of AI software systems may jeopardize the entire computing infrastructure, resulting in significant financial losses, legal implications, and reputational damage.

It is essential to ensure that AI systems are developed in a secure and reliable manner, and that proper measures are taken to mitigate the associated risks. In this talk, Prof. Shuai WANG will briefly introduce his recent efforts in securing AI software systems. He will present various techniques and strategies that his research group has applied to uncover AI software pitfalls and enhance their security, including AI software testing, system-level exploitation, privacy stealing, compiler-level security monitoring, and trusted computing systems.

Poster by CSE Postgraduate Students

The posters, prepared by CSE PhD and MPhil students, will be exhibited outside LT-B, and the authors will be present to answer questions and discuss their posters from 12:00 noon to 1:00 pm. Refreshments will be served during the poster session.


Soroush FAROKHNIA - MPhil(CSE) student

Title: Alleviating High Gas Costs by Secure and Trustless Off-chain Execution of Smart Contracts

Author(s): Soroush Farokhnia, Amir Kafshdar Goharshady


Smart contracts are programs that are executed on the blockchain and can hold, manage, and transfer assets in the form of cryptocurrencies. Contract execution is performed on-chain and is subject to consensus, i.e., every node on the blockchain network has to run the function calls and keep track of their side effects, including updates to the balances and the contract's storage. Most programmable blockchains introduce the notion of gas, which prevents DoS attacks from malicious parties who might try to slow down the network by performing time-consuming and resource-heavy computations. While the gas idea has largely succeeded in its goal of avoiding DoS attacks, the resulting fees are extremely high. For example, in June-September 2022, on Ethereum alone, the average total gas usage was 2,706.8 ETH ≈ 3,938,749 USD per day. We propose a protocol for alleviating these costs by moving most of the computation off-chain while preserving enough data on-chain to guarantee an implicit consensus about the contract state and ownership of funds in case of dishonest parties. We perform extensive experiments over 3,330 real-world Solidity contracts that were involved in 327,132 transactions in June-September 2022 on Ethereum and show that our approach reduces their gas usage by 40.09 percent, which amounts to a whopping 442,651 USD.
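
The general off-chain pattern can be sketched as follows (an illustration of the idea only, not the paper's protocol, which additionally handles disputes with dishonest parties): only a hash commitment of the contract state lives on-chain, while the state itself and the computation move off-chain, and anyone can recompute the hash to check a published state.

```python
# Sketch: on-chain state commitment, off-chain execution (hypothetical state).
import hashlib, json

def commit(state):
    # Deterministic serialization, then a collision-resistant hash.
    blob = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# Only this digest is stored on-chain.
on_chain = commit({"balances": {"alice": 5, "bob": 3}})

# Off-chain: the parties execute a transfer and publish the new state + hash.
new_state = {"balances": {"alice": 4, "bob": 4}}
new_commitment = commit(new_state)

# Any observer can verify that the published state matches the commitment.
assert commit(new_state) == new_commitment
print(new_commitment[:16])
```

Hashing a digest is vastly cheaper than executing and storing the full computation on-chain, which is where the gas savings come from.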

Chun Kit LAM - PhD(CSE) student

Title: Being Lazy When it Counts: Practical Functional Programming with Constant-Time Memory Management

Author(s): Chun Kit LAM, Lionel PARREAUX


Functional programming (FP) lets users focus on the business logic of their applications by providing them with high-level and composable abstractions. However, FP is traditionally implemented based on stop-the-world garbage collection, which may introduce long and unpredictable pauses during program execution, limiting the applicability of the functional paradigm. We propose the first approach to functional programming with constant-time memory management, meaning that allocation and deallocation take only a bounded and predictable amount of time. We use an old and mostly forgotten variant of reference counting in which reference counts are decreased lazily and which does not leak memory when combined with uniform allocations (i.e., "one size fits all"), a scheme that is well suited to FP. More specifically, we modify the Koka programming language to support a hybrid form of reference counting whereby statically delimited single-threaded parts of an application are guaranteed to use constant-time memory management, while parts that need to perform large allocations and concurrent operations are still allowed but clearly delineated using Koka's effect system. We show that our approach is eminently practical, as its performance is on par with existing state-of-the-art implementations of reference counting and garbage collection for FP, sometimes even outperforming them. We believe this opens the door to many new industrial applications of the functional paradigm, such as its use in real-time embedded software; indeed, the development of a high-level language describing latency-critical quantum physics experiments was one of the original use cases that prompted this work.
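
A toy model (hypothetical, far simpler than the Koka runtime) shows why lazy decrements make every memory operation constant-time: a release merely defers a decrement, and each allocation processes at most one deferred cell, whose children's decrements are deferred in turn.

```python
# Toy lazy reference counting with uniform ("one size fits all") cells.
class Heap:
    def __init__(self):
        self.refcount = {}   # cell id -> reference count
        self.children = {}   # cell id -> ids it references
        self.deferred = []   # cells awaiting their decrement
        self.next_id = 0

    def alloc(self, children=()):
        # O(1): process at most one deferred decrement before allocating.
        if self.deferred:
            cell = self.deferred.pop()
            self.refcount[cell] -= 1
            if self.refcount[cell] == 0:
                # The children's decrements are themselves deferred (lazy).
                self.deferred.extend(self.children.pop(cell))
                del self.refcount[cell]
        cid = self.next_id
        self.next_id += 1
        self.refcount[cid] = 1
        self.children[cid] = list(children)
        for c in children:
            self.refcount[c] += 1
        return cid

    def release(self, cid):
        # O(1): just defer the decrement; no recursive freeing here.
        self.deferred.append(cid)

h = Heap()
a = h.alloc()
b = h.alloc(children=[a])
h.release(b)            # constant time: nothing is freed yet
c = h.alloc()           # pops b; a's decrement is now deferred in turn
print(len(h.deferred))  # → 1 (the pending decrement for a)
```

With uniform cell sizes, popping one deferred cell always yields exactly the space the new allocation needs, which is why this variant does not leak despite its laziness.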

Binyang DAI - PhD(CSE) student

Title: Continuous SQL Query Evaluation Using Flink

Author(s): Binyang Dai, Qichen Wang, Ke Yi


Continuous SQL query evaluation involves data flowing continuously through a running SQL query, which incrementally evaluates its results and updates its internal state. However, the standard approach of constructing a query plan and materializing intermediate views can incur high polynomial costs in space and time, primarily due to the join operators in the execution plan.

To address this issue, we propose a new change propagation framework for continuous SQL query evaluation that avoids joins, thereby avoiding the polynomial blowup. Our framework supports constant delay enumeration for both deltas and full query results, as with other standard frameworks.

We implemented a prototype of our system on top of Flink, an open-source stream processing framework for distributed, high-performance, and always-available stream applications. Our approach can be easily integrated into various standard change propagation frameworks. Our experimental results demonstrate that our system significantly outperforms other systems regarding space, time, and latency.

Wei DONG - PhD(CSE) student

Title: DP-SQL: A Differentially Private SQL Engine

Author(s): Wei Dong, Ke Yi


Differential privacy (DP) has garnered significant attention from both academia and industry due to its potential for offering robust privacy protection for individual data during analysis. With the increasing volume of sensitive information being collected by organizations and analyzed through SQL queries, developing a general-purpose query engine that supports a broad range of SQL queries while maintaining DP has become the holy grail of privacy-preserving query release. In a relational database, there are two DP policies: tuple-DP, which protects the privacy of individual tuples in each relation, and user-DP, which protects all data belonging to each user via foreign keys. Under each policy, we have designed DP mechanisms for answering a broad class of queries consisting of the selection, projection, aggregation, and join operators. Five papers have been published in tier-1 computer science conferences. Finally, based on these algorithms, we have built a DP-SQL system that significantly outperforms existing systems in terms of both utility and efficiency.
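
The standard building block such an engine rests on can be shown with the textbook Laplace mechanism (a generic illustration, not DP-SQL's actual mechanisms): noise calibrated to a query's sensitivity is added to the true answer, here a COUNT whose sensitivity under tuple-DP is 1.

```python
# Laplace mechanism for a COUNT query (sensitivity 1 under tuple-DP).
import random

def laplace_noise(scale):
    # Laplace(0, scale) sampled as a random-sign exponential draw.
    magnitude = random.expovariate(1.0 / scale)
    return magnitude if random.random() < 0.5 else -magnitude

def dp_count(rows, predicate, epsilon):
    true_count = sum(1 for r in rows if predicate(r))
    # Adding/removing one tuple changes COUNT by at most 1, so the
    # noise scale sensitivity/epsilon reduces to 1/epsilon.
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 35, 41, 29, 52, 38]
print(dp_count(ages, lambda a: a >= 30, epsilon=1.0))
```

Joins are the hard case: a single user's tuples can multiply through a join, inflating sensitivity, which is exactly what the tuple-DP and user-DP mechanisms above must control.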

Hanrong YE - PhD(CSE) student

Title: Inverted Pyramid Multi-task Transformer for Dense Scene Understanding

Author(s): Hanrong Ye and Dan Xu


Multi-task visual scene understanding typically requires joint learning and reasoning over a set of correlated tasks, which is highly important in computer vision and has a wide range of application scenarios such as autonomous driving, robotics, and augmented or virtual reality (AR/VR). Most existing works suffer from a severe limitation: they model only local context due to their heavy use of convolution operations, whereas learning interactions and inference in a global spatial and multi-task context is critical for this problem. In this paper, we propose a novel end-to-end Inverted Pyramid multi-task Transformer (InvPT) to perform simultaneous modeling of spatial positions and multiple tasks in a unified framework. To the best of our knowledge, this is the first work that explores designing a transformer structure for multi-task dense prediction for scene understanding. Demo and code are available.

Haoxuan CHE - PhD(CSE) student

Title: Learning Robust Representation for Joint Grading of Ophthalmic Diseases via Adaptive Curriculum and Feature Disentanglement

Author(s): Haoxuan Che, Haibo Jin, Hao Chen


Diabetic retinopathy (DR) and diabetic macular edema (DME) are leading causes of permanent blindness worldwide. Designing an automatic grading system with good generalization ability for DR and DME is vital in clinical practice. However, prior works either grade DR or DME independently, without considering internal correlations between them, or grade them jointly by shared feature representation, yet ignoring potential generalization issues caused by difficult samples and data bias. Aiming to address these problems, we propose a framework for joint grading with the dynamic difficulty-aware weighted loss (DAW) and the dual-stream disentangled learning architecture (DETACH). Inspired by curriculum learning, DAW learns from simple samples to difficult samples dynamically via measuring difficulty adaptively. DETACH separates features of grading tasks to avoid potential emphasis on the bias. With the addition of DAW and DETACH, the model learns robust disentangled feature representations to explore internal correlations between DR and DME and achieve better grading performance. Experiments on three benchmarks show the effectiveness and robustness of our framework under both the intra-dataset and cross-dataset tests.

Zhuo CAI - MPhil(CSE) student

Title: Proof-of-Stake with Game-theoretic Randomness

Author(s): Zhuo CAI, Amir GOHARSHADY


Proof-of-Stake blockchain protocols rely on a distributed random beacon to select the next miner that is allowed to add a block to the chain. Each party's likelihood to be selected is in proportion to their stake in the cryptocurrency.

Current random beacons used in PoS protocols have two fundamental limitations: either (i) they rely on pseudo-randomness, e.g. assuming that the output of a hash function is uniform, which is an unproven assumption, or (ii) they generate their randomness using a distributed protocol in which several participants are required to submit random numbers which are then used in the generation of a final random result. However, in this case, there is no guarantee that the numbers provided by the parties are truly random and there is no incentive for the parties to honestly generate uniform randomness.

In this work, we provide a protocol that generates trustless and unbiased randomness for PoS that overcomes the above limitations. We provide a game-theoretic guarantee showing that it is in everyone's best interest to submit truly uniform random numbers. Hence, our approach is the first to incentivize honest behavior instead of just assuming it.
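
The structure the work builds on is the classic commit-reveal beacon, sketched below for illustration; note that this basic scheme alone does NOT provide the paper's game-theoretic guarantee, since a party can still bias the result by refusing to reveal.

```python
# Commit-reveal randomness beacon (basic scheme, without incentives).
import hashlib, secrets

def commit(value, nonce):
    # Hiding + binding commitment to a 256-bit-encoded value.
    return hashlib.sha256(nonce + value.to_bytes(32, "big")).hexdigest()

# Phase 1: every participant publishes only a commitment.
participants = []
for _ in range(3):
    r = secrets.randbelow(2**128)
    nonce = secrets.token_bytes(16)
    participants.append((r, nonce, commit(r, nonce)))

# Phase 2: participants reveal; everyone checks reveals against commitments.
beacon = 0
for r, nonce, c in participants:
    assert commit(r, nonce) == c  # a mismatched reveal would be rejected
    beacon ^= r                   # XOR-combine the contributions

print(f"{beacon:032x}")
```

If at least one contribution is uniform and independent, the XOR is uniform; the paper's contribution is making uniform submission each party's best strategy rather than an assumption.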

Qiyao LUO - PhD(CSE) student

Title: Secure Multi-party Query Evaluation

Author(s): Qiyao Luo, Ke Yi


Secure multi-party query evaluation has gained steam in recent years due to rising security and privacy concerns. The technique enables parties to compute SQL queries securely and privately without revealing their data to each other. This is crucial in cases where parties cannot trust each other with their sensitive data, such as in medical research, financial analysis, or government intelligence gathering.

Our work has developed an efficient query processing engine for acyclic joins with \(\tilde{O}(\text{IN} + \text{OUT})\) cost and a constant number of rounds, matching the cost of optimal plain-text algorithms up to a polylogarithmic factor. It improves on previous methods by eliminating the expensive secure sorting operation. Nonetheless, current MPC techniques still have high computational and communication costs. To address this, we investigate the problem of secure random sampling and propose an approximate query processing MPC system. Typically, generating a sample obliviously requires a full scan of the entire data, resulting in a cost of \(\Omega(\text{IN})\). Our proposed system employs a two-stage approach that significantly improves the performance of the query processing system. In the offline stage, we perform the costly operations (i.e., generating samples); in the online stage, we can quickly respond to a given query using the pre-processed data. This innovation reduces the amortized cost per query and the response time to \(\tilde{O}(s)\), proportional to the sample size.

Andong FAN - MPhil(CSE) student

Title: Simple Extensible Programming through Precisely-Typed Open Recursion

Author(s): Andong Fan


In modular programming, the famous Expression Problem describes the dilemma of modularly extending both datatypes and their operations in object-oriented and functional programming. Recently, Parreaux et al. proposed a novel language called MLscript, which features classes and traits, instance matching, union and intersection types, and principal type inference. In this poster, we show that a small extension to MLscript gives a simple solution to the Expression Problem through precisely-typed open recursion.

Giovanna KOBUS CONRADO - PhD(CSE) student

Title: The Bounded Pathwidth of Control-flow Graphs

Author(s): Giovanna Kobus Conrado, Amir Kafshdar Goharshady, Chun Kit Lam


Pathwidth and treewidth are standard and well-studied graph sparsity parameters which intuitively model the degree to which a given graph resembles a path or a tree, respectively. It is well-known that the control-flow graphs of structured goto-free programs have a tree-like shape and bounded treewidth. This fact has been exploited to design considerably more efficient algorithms for a wide variety of static analysis and compiler optimization problems, such as register allocation, μ-calculus model-checking and parity games, data-flow analysis, cache management, and lifetime-optimal redundancy elimination. However, there is no bound in the literature for the pathwidth of programs, beyond the general inequality that the pathwidth is at most O(log n) times the treewidth, where n is the number of vertices of the graph.

In this work, we prove that control-flow graphs of structured programs have bounded pathwidth and provide a linear-time algorithm to obtain a path decomposition of small width. Specifically, we establish a bound of 2d on the pathwidth of programs with nesting depth d. Since real-world programs have small nesting depth, they also have bounded pathwidth. This is significant for a number of reasons: (i) pathwidth is a strictly stronger parameter than treewidth, i.e. any graph with bounded pathwidth has bounded treewidth, but the converse does not hold; (ii) any algorithm that is designed with treewidth in mind can be applied to bounded-pathwidth graphs with no change; (iii) there are problems that are fixed-parameter tractable with respect to pathwidth but not treewidth; (iv) verification algorithms that are designed based on treewidth would become significantly faster when using pathwidth as the parameter; and (v) it is easier to design algorithms based on bounded pathwidth since one does not have to consider the often-challenging case of merge nodes in treewidth-based dynamic programming. Thus, we invite the static analysis and compiler optimization communities to adopt pathwidth as their parameter of choice instead of relying on treewidth-based algorithms. Intuitively, control-flow graphs are not only tree-like, but also path-like and one can obtain simpler and more scalable algorithms by relying on path-likeness instead of tree-likeness.

As a motivating example, we provide a simpler and more efficient algorithm for spill-free register allocation using bounded pathwidth instead of treewidth. Our algorithm reduces the runtime from O(n · r^(2·tw·r + 2·r)) to O(n · r^(pw·r + r + 1)), where n is the number of lines of code, r is the number of registers, tw is the treewidth and pw is the pathwidth. We provide extensive experimental results showing that our approach is applicable to a wide variety of embedded benchmarks from SDCC and obtains runtime improvements of 2-3 orders of magnitude. As such, the benefits of using pathwidth are not limited to the theoretical side and simplicity in algorithm design, but are also apparent in practice.

Luyu CHENG - PhD(CSE) student

Title: The Ultimate Conditional Syntax

Author(s): Luyu Cheng, Lionel Parreaux


Dialects of ML and related languages typically support expressive pattern-matching syntax, which allows programmers to write concise, expressive, and type-safe code to manipulate algebraic data types. Many features have been proposed to enhance the expressiveness of pattern matching, such as pattern bindings, pattern alternatives (a.k.a. disjunction), pattern conjunction, view patterns, pattern guards, 'if-let' patterns, multi-way if-expressions, etc.

We propose a new framework for expressing pattern-matching code in a way that is both more expressive and (we argue) more readable than previous alternatives. Our syntax subsumes many proposed extensions to ML pattern matching by allowing parallel and nested matches interleaved with computations and intermediate bindings. This is achieved through a form of nested multi-way if-expressions, an expression-splitting mechanism to factor common conditional prefixes, and a technique we call conditional pattern flowing.

We present many examples of this new syntax in the setting of MLscript, a new ML-family programming language that is being developed at the Hong Kong University of Science and Technology.

Xi ZHAO - PhD(CSE) student

Title: Towards Efficient Index Construction and Approximate Nearest Neighbor Search in High-Dimensional Spaces

Author(s): Xi Zhao, Yao Tian, Kai Huang, Bolong Zheng, Xiaofang Zhou


The approximate nearest neighbor (ANN) search problem in high-dimensional spaces is fundamental but computationally very expensive. Many methods have been designed to solve the ANN problem, such as LSH-based methods and graph-based methods. LSH-based methods can be costly when high query quality is required, due to hash-boundary issues, while graph-based methods can achieve better query performance via greedy expansion in an approximate proximity graph (APG). However, the construction cost of these APGs can be one or two orders of magnitude higher than that of building hash-based indexes. We propose a novel approach named LSH-APG to build APGs and facilitate fast ANN search using a lightweight LSH framework. LSH-APG builds an APG by consecutively inserting points based on their nearest neighbor relationships, with an efficient and accurate LSH-based search strategy. Its maintenance cost and query cost for a point are proven to be less affected by dataset cardinality. Extensive experiments on real-world datasets demonstrate that LSH-APG incurs significantly lower construction cost yet achieves better query performance than existing graph-based methods.
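
The LSH side of such a framework can be illustrated with the classic random-hyperplane family (a generic textbook construction, not the LSH-APG index itself): each hyperplane contributes one bit of a hash code, and nearby vectors tend to land on the same side of most hyperplanes.

```python
# Random-hyperplane LSH for angular similarity (illustration only).
import random

def make_hash(dim, n_planes, rng):
    # Each plane is a random Gaussian normal vector.
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]
    def h(v):
        # One bit per hyperplane: which side of the plane v falls on.
        return tuple(int(sum(p[i] * v[i] for i in range(dim)) >= 0)
                     for p in planes)
    return h

rng = random.Random(42)
h = make_hash(dim=4, n_planes=8, rng=rng)

a = [1.0, 0.9, 0.1, 0.0]
b = [0.98, 0.92, 0.12, 0.01]  # very close to a
print(sum(x == y for x, y in zip(h(a), h(b))), "of 8 bits agree")
```

The "hash-boundary issue" the abstract mentions is visible here: two close points can still be split by a plane passing between them, which is what makes pure LSH costly at high recall.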

Yongkang ZHANG - PhD(CSE) student

Title: Workload Consolidation in Alibaba Clusters: The Good, the Bad, and the Ugly

Author(s): Yongkang Zhang, Yinghao Yu, Wei Wang, Qiukai Chen, Jie Wu, Zuowei Zhang, Jiang Zhong, Tianchen Ding, Qizhen Weng, Lingyun Yang, Cheng Wang, Jian He, Guodong Yang, and Liping Zhang


Web companies typically run latency-critical long-running services and resource-intensive, throughput-hungry batch jobs in a shared cluster for improved utilization and reduced cost. Despite many recent studies on workload consolidation, the production practice remains largely unknown. This paper describes our efforts to efficiently consolidate the two types of workloads in Alibaba clusters to support the company's e-commerce businesses.

At the cluster level, host and GPU memory are the bottleneck resources that limit the scale of consolidation. Our system proactively reclaims the idle host memory pages of service jobs and dynamically relinquishes their unused host and GPU memory following the predictable diurnal pattern of user traffic, a technique termed tidal scaling. Our system further performs node-level micro-management to ensure that the increased workload consolidation does not result in harmful resource contention. We briefly share our experience in handling surging traffic from flash crowds during the seasonal shopping festivals (e.g., November 11) using these "good" practices. We also discuss the limitations of our current solution (the "bad") and some practical engineering constraints (the "ugly") that make many prior research solutions inapplicable to our system.

Possible Ways of Collaboration with Industry Partners

  1. Joint research lab
    A long-term collaboration arrangement in which companies set up a research funding pool at the university.
  2. Internship
    Our students are encouraged to take internships during their studies. An internship can be as short as six weeks or as long as one year. We maintain a database listing the internship opportunities offered by companies; the database is open to our students.
  3. Final year project
    Our students are required to complete a project in their final year applying what they have learnt. Companies are welcome to let us know, by March each year, the topics they are interested in sponsoring. Selected topics will be open for enrollment by students as their final year projects.
  4. Professional and development course
    All students enrolled in our programs are required to take a seminar course, where companies are welcome to give a talk introducing the various opportunities and career paths in IT professions for our graduates.
  5. Research and Technology Forum
    We will organize a Research and Technology Forum, inviting companies to join a series of 5-min research highlights by our faculty.
  6. Innovation and Technology Fund (ITF)
    The Hong Kong government offers funding to facilitate collaboration between universities and industry. The two popular schemes are the Innovation and Technology Support Programme (ITSP) and the Partnership Research Programme (PRP). Besides tax relief, companies are eligible for a rebate of 40% of their cash sponsorship in these projects from the government via the Research and Development Cash Rebate Scheme (CRS).

Strategic Partner

HKSTP Startups Alumni Association


Ms. Sylvia Mak