Prompt Tuning for Sequence Classification

In my previous blog post Zero-Shot Text Classification with pretrained LLM, I used Qwen2.5-0.5B-Instruct for sentiment analysis without any training. With some tweaks to the prompts, we saw accuracy improve from 77.5% to 82.5%. We might be able to squeeze out even more performance with prompt engineering, but it is inefficient: most of the time we don't know why one word works better than another in a prompt. Instead of prompt engineering, we can do prompt tuning with some labelled data, which is one of the parameter-efficient ways to fine-tune an LLM. The main idea is to prepend some tunable tokens to a task-specific prompt while keeping the LLM frozen. We then train the embeddings of the prepended tokens on the labelled data so that the learned tokens align the task-specific prompt better with the task.
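One way to set this up is with the 🤗 PEFT library; the sketch below shows the moving parts, where the number of virtual tokens and the initialization text are arbitrary choices for illustration, not the exact settings used in the post.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

model_name = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# 8 virtual tokens are prepended to every prompt; only their embeddings
# are trainable, while the base model stays frozen.
config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=8,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Classify the sentiment of the tweet:",
    tokenizer_name_or_path=model_name,
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the virtual token embeddings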

Read More

Sequence Classification with Apple MLX

MLX is an array framework for machine learning on Apple silicon. Its biggest advantage is that it works with Apple's unified memory, so operations on MLX arrays can run on any of the supported device types without transferring data. This makes MLX a strong candidate for running inference on, and even training, a large model on Apple silicon. There are examples designed specifically for LLMs, with a focus on text completion. As the framework is relatively new, there are as of today few MLX examples for other LLM tasks such as sequence classification. I will provide an example of classification inference with MLX, replicating what I did in my previous article Zero-Shot Text Classification with pretrained LLM.
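As a quick taste of the framework, here is a tiny sketch of how MLX arrays behave; the numbers are arbitrary.

import mlx.core as mx

logits = mx.array([1.2, -0.3, 0.5])  # lives in unified memory
probs = mx.softmax(logits)           # recorded lazily, not yet computed
mx.eval(probs)                       # evaluated on the default device,
                                     # with no host/device copies needed
print(probs)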

Read More

Zero-Shot Text Classification with pretrained LLM

According to this article,

Zero-shot text classification is a task in natural language processing where a model is trained on a set of labeled examples but is then able to classify new examples from previously unseen classes.

Simply put, zero-shot text classification is using preexisting models on classification tasks that the models were not trained on. Large Language Models backed by attention have many great applications, such as summarization, chatbots, and code completion. They also give zero-shot text classification huge potential, since most LLMs are pretrained on tremendous amounts of data that already cover most common use cases. LLMs with strong reasoning capability, such as DeepSeek, can even perform well on unseen data. In this article, I want to discuss some practical ways to do zero-shot classification with pretrained LLMs using 🤗 Transformers.
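For reference, 🤗 Transformers also ships a ready-made zero-shot pipeline built on NLI models; this short sketch (the model name and labels are just examples) gives a baseline to compare against.

from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(
    "I love this movie!",
    candidate_labels=["positive", "negative"],
)
print(result["labels"][0])  # label with the highest score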

Read More

PySpark Estimator and Transformer

PySpark's pipeline is a powerful tool that encapsulates machine learning processes. We can build rather complicated pipelines to suit our needs using the estimators/transformers that come with PySpark's library, until we can't. In this article, I will show how we can build custom estimators and transformers to make pipelines even more powerful.

Imagine that we want to build a model with some high-cardinality categorical features. Upon inspection, we find that only the most frequent values are useful, so we decide to keep those values and mask all others as "OTHERS". We will implement CardinalityReducer, which keeps only the N most frequent values in a categorical column (or a column of string type). We will implement it so that it can be fit on training sets together with other components in a pipeline, as in the sketch below.
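Here is a minimal sketch of the idea; to stay short it passes the column name and N directly instead of through PySpark's Param machinery, which a full pipeline-ready implementation should use.

from pyspark.ml import Estimator, Transformer
from pyspark.sql import functions as F


class CardinalityReducerModel(Transformer):
    """Masks values outside `keep` with 'OTHERS' in column `col`."""

    def __init__(self, col, keep):
        super().__init__()
        self.col = col
        self.keep = keep

    def _transform(self, df):
        c = F.col(self.col)
        return df.withColumn(
            self.col, F.when(c.isin(self.keep), c).otherwise(F.lit("OTHERS"))
        )


class CardinalityReducer(Estimator):
    """Learns the N most frequent values of a column from the training set."""

    def __init__(self, col, n=10):
        super().__init__()
        self.col = col
        self.n = n

    def _fit(self, df):
        top = df.groupBy(self.col).count().orderBy(F.desc("count")).limit(self.n)
        keep = [row[self.col] for row in top.collect()]
        return CardinalityReducerModel(self.col, keep)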

Read More

Fixing an issue in saving/loading BERT models

Recently I came across an issue in saving/loading BERT models with TensorFlow. The BERT models are provided by the Transformers library, and I used the TensorFlow backend. When saving with model.save(path) and then loading with tf.keras.models.load_model(path), it gave the following TypeError or ValueError:

TypeError/ValueError: The two structures don't have the same nested structure.
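The error can be triggered with a snippet along these lines (the model name is just an example):

import tensorflow as tf
from transformers import TFBertModel

model = TFBertModel.from_pretrained("bert-base-uncased")
model.save("saved_bert")  # save in the SavedModel format
reloaded = tf.keras.models.load_model("saved_bert")  # raises the error above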

This article documents several ways to solve the issue.

Read More

Simple SVD with Bias for Netflix Prize

In my linear algebra class this summer, I used the Netflix Prize challenge as a practical example of an application of singular value decomposition (SVD). To be more precise, I explained the term \(p_u^Tq_i\) in the simple SVD with bias model:

$$\hat{r}_{ui} = \mu + b_u + b_i + p_u^Tq_i.$$


The above model can be found in section 2.1 of this progress paper by the winning team. In this note, I will explain the model and give an implementation in Python. A C implementation of the model can be found in my GitHub repository: https://github.com/wormtooth/netflix_svd.
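As a preview, here is a minimal NumPy sketch of the model's prediction and one stochastic gradient step on a single observed rating; the learning rate and regularization values are arbitrary placeholders, not the tuned values from the post.

import numpy as np

def predict(mu, b_u, b_i, p_u, q_i):
    # r_hat_ui = mu + b_u + b_i + p_u^T q_i
    return mu + b_u + b_i + p_u @ q_i

def sgd_step(r_ui, mu, b_u, b_i, p_u, q_i, lr=0.005, reg=0.02):
    # one SGD step on the regularized squared error for rating r_ui
    err = r_ui - predict(mu, b_u, b_i, p_u, q_i)
    b_u += lr * (err - reg * b_u)
    b_i += lr * (err - reg * b_i)
    p_u_old = p_u.copy()  # keep the old factors for the q_i update
    p_u += lr * (err * q_i - reg * p_u)
    q_i += lr * (err * p_u_old - reg * q_i)
    return b_u, b_i, p_u, q_i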

Read More

Clustering Weibo Tags

I started a project last October to collect Weibo's top search data (微博热搜榜) hourly. Together with the keywords or tags (关键词), the most recent related weibos (or tweets) are collected as well. The results are saved to a JSON file, with the format explained on this page.

In this post, I would like to explore this data set and try to cluster the tags. To be more precise, multiple tags can refer to the same event, so these different tags are related and may even share the same meaning. The task is to group similar tags together based on the collected data.
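To illustrate the kind of approach this enables (not necessarily the method used in the post), one could represent each tag by the text of its related weibos and cluster the resulting vectors, roughly like this; it assumes scikit-learn ≥ 1.2 for the metric argument, and the threshold is arbitrary.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# hypothetical input: one document per tag, concatenating its related weibos
tag_docs = {
    "tag_a": "text of weibos related to tag_a ...",
    "tag_b": "text of weibos related to tag_b ...",
}

vectors = TfidfVectorizer().fit_transform(tag_docs.values()).toarray()
labels = AgglomerativeClustering(
    n_clusters=None,          # let a distance threshold decide the clusters
    distance_threshold=1.0,   # arbitrary; needs tuning on real data
    metric="cosine",
    linkage="average",
).fit_predict(vectors)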

Read More

Principal Component Analysis

This is a note on applications that I used in the Linear Algebra course I lectured at Purdue University. It is slightly modified so that it is more or less self-contained.

Principal Component Analysis (PCA) is a linear algebra technique for data analysis, which is an application of eigenvalues and eigenvectors. PCA can be used in

  1. exploratory data analysis (visualizing the data)
  2. feature reduction

We will learn the basic idea of PCA and see its applications in handwritten-digit recognition, eigenfaces, and more.
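To preview the linear algebra at work, here is a minimal NumPy sketch of PCA via the eigen-decomposition of the covariance matrix; the choice of k is up to the application.

import numpy as np

def pca(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                 # center each feature
    cov = np.cov(Xc, rowvar=False)          # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: cov is symmetric
    order = np.argsort(eigvals)[::-1]       # largest variance first
    W = eigvecs[:, order[:k]]               # top-k eigenvectors
    return Xc @ W                           # projected data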

Read More

Linear Regression

This is a note on applications that I used in the Linear Algebra course I lectured at Purdue University. It is slightly modified so that it is more or less self-contained.

Starting from the least-squares solution, we are going to give an introductory exploration of (linear) regression in this note.

import numpy as np
import sklearn.linear_model
import matplotlib.pyplot as plt
from IPython.display import set_matplotlib_formats

plt.rcParams["figure.figsize"] = (8, 6)  # default figure size for this note
set_matplotlib_formats('png', 'pdf')     # render figures as PNG and PDF

Least-squares solution

Let \(A\) be an \(m \times n\) matrix, and \(B\) be a vector in \(\mathbb{R}^m\). A least-squares solution to a linear system \(Ax = B\) is an \(\hat{x}\) such that \(|A \hat{x} - B| \le |A x - B|\) for all \(x\). Here, \(|x|\) is the length of the vector \(x\). If the system \(Ax = B\) is consistent, then a least-squares solution is just a solution.
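Numerically, NumPy can compute a least-squares solution directly; here is a small sketch with a made-up inconsistent system.

import numpy as np

# an inconsistent 3x2 system: no exact solution exists
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
B = np.array([1.0, 2.0, 4.0])

x_hat, residual, rank, _ = np.linalg.lstsq(A, B, rcond=None)
print(x_hat)  # the x that minimizes |A x - B|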

Read More

LeetCode Contest 209

I only managed to solve the first three problems this time, and I spent too long on the third, so my rank was only 505.

Problem 1: Special Array With X Elements Greater Than or Equal X

Given an array, find an x such that exactly x of the numbers are greater than or equal to x.

Brute force: enumerate all possible values of x. Since at most len(nums) numbers can be greater than or equal to x, it suffices to check x from 0 to len(nums).

from typing import List

class Solution:
    def specialArray(self, nums: List[int]) -> int:
        # check each candidate x in [0, len(nums)]; O(n^2) overall
        for n in range(len(nums) + 1):
            if sum(1 for v in nums if v >= n) == n:
                return n
        return -1

Read More