TOPIC #1: Deploy Our Own ChatGPT: Enabling Efficient Inference Serving for LLMs (Large Language Models) in the Cloud

Requirements: basic understanding of a deep learning platform, e.g., PyTorch, TensorFlow, etc. (select 1 paper below)

TOPIC #2: IceCrusher: Alleviating Cold-Start for Serverless Functions

Requirements: hands-on experience with containers, e.g., Docker, containerd, microVMs, runc, etc., or familiarity with related technologies (select 1 paper below)

TOPIC #3: Awesome FaaS-Inference: Optimizing Serverless Computing for Deep Learning Inference

Requirements: basic knowledge of serverless computing and a deep learning inference framework, e.g., PyTorch, TensorFlow, etc. (select 1 paper below)