Commit: Merge branch 'v3' into pipeline
Yang Gu authored Aug 9, 2024
2 parents 62eaa8e + 83f5718 commit c0b4a01
Showing 36 changed files with 817 additions and 910 deletions.
8 changes: 8 additions & 0 deletions .prettierignore
@@ -0,0 +1,8 @@
+# Ignore artifacts:
+.github
+dist
+docs
+examples
+scripts
+types
+*.md
1 change: 1 addition & 0 deletions .prettierrc
@@ -0,0 +1 @@
+{}
25 changes: 13 additions & 12 deletions README.md
@@ -11,14 +11,14 @@
</p>

<p align="center">
<a href="https://www.npmjs.com/package/@xenova/transformers">
<img alt="NPM" src="https://img.shields.io/npm/v/@xenova/transformers">
<a href="https://www.npmjs.com/package/@huggingface/transformers">
<img alt="NPM" src="https://img.shields.io/npm/v/@huggingface/transformers">
</a>
<a href="https://www.npmjs.com/package/@xenova/transformers">
<img alt="NPM Downloads" src="https://img.shields.io/npm/dw/@xenova/transformers">
<a href="https://www.npmjs.com/package/@huggingface/transformers">
<img alt="NPM Downloads" src="https://img.shields.io/npm/dw/@huggingface/transformers">
</a>
<a href="https://www.jsdelivr.com/package/npm/@xenova/transformers">
<img alt="jsDelivr Hits" src="https://img.shields.io/jsdelivr/npm/hw/@xenova/transformers">
<a href="https://www.jsdelivr.com/package/npm/@huggingface/transformers">
<img alt="jsDelivr Hits" src="https://img.shields.io/jsdelivr/npm/hw/@huggingface/transformers">
</a>
<a href="https://github.com/xenova/transformers.js/blob/main/LICENSE">
<img alt="License" src="https://img.shields.io/github/license/xenova/transformers.js?color=blue">
@@ -69,7 +69,7 @@ out = pipe('I love transformers!')
<td>

```javascript
-import { pipeline } from '@xenova/transformers';
+import { pipeline } from '@huggingface/transformers';

// Allocate a pipeline for sentiment-analysis
let pipe = await pipeline('sentiment-analysis');
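// A sketch of the call that follows in the full snippet; the output
// shape is an assumption based on the sentiment-analysis task:
let out = await pipe('I love transformers!');
// [{ label: 'POSITIVE', score: 0.999... }]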
```

@@ -93,15 +93,15 @@ let pipe = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-u
## Installation


-To install via [NPM](https://www.npmjs.com/package/@xenova/transformers), run:
+To install via [NPM](https://www.npmjs.com/package/@huggingface/transformers), run:
```bash
-npm i @xenova/transformers
+npm i @huggingface/transformers
```

Alternatively, you can use it in vanilla JS, without any bundler, by using a CDN or static hosting. For example, using [ES Modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules), you can import the library with:
```html
<script type="module">
-import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.0';
+import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0-alpha.3';
</script>
```

@@ -134,12 +134,12 @@ Check out the Transformers.js [template](https://huggingface.co/new-space?templa



-By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.0/dist/), which should work out-of-the-box. You can customize this as follows:
+By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0-alpha.3/dist/), which should work out-of-the-box. You can customize this as follows:

### Settings

```javascript
-import { env } from '@xenova/transformers';
+import { env } from '@huggingface/transformers';

// Specify a custom location for models (defaults to '/models/').
env.localModelPath = '/path/to/models/';
```
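The settings block is truncated here. As a hedged sketch, the other `env` options commonly shown alongside it look like this (option names as documented by the library, values illustrative):

```javascript
import { env } from '@huggingface/transformers';

// Disable the loading of remote models from the Hugging Face Hub:
env.allowRemoteModels = false;

// Set the location of .wasm files (defaults to a CDN path):
env.backends.onnx.wasm.wasmPaths = '/path/to/files/';
```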
@@ -302,6 +302,7 @@ You can refine your search by selecting the task you're interested in (e.g., [te
1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
1. **Florence2** (from Microsoft) released with the paper [Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks](https://arxiv.org/abs/2311.06242) by Bin Xiao, Haiping Wu, Weijian Xu, Xiyang Dai, Houdong Hu, Yumao Lu, Michael Zeng, Ce Liu, Lu Yuan.
1. **[Gemma](https://huggingface.co/docs/transformers/main/model_doc/gemma)** (from Google) released with the paper [Gemma: Open Models Based on Gemini Technology and Research](https://blog.google/technology/developers/gemma-open-models/) by the Gemma Google team.
+1. **[Gemma2](https://huggingface.co/docs/transformers/main/model_doc/gemma2)** (from Google) released with the paper [Gemma2: Open Models Based on Gemini Technology and Research](https://blog.google/technology/developers/google-gemma-2/) by the Gemma Google team.
1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
12 changes: 6 additions & 6 deletions docs/scripts/build_readme.py
@@ -13,14 +13,14 @@
</p>
<p align="center">
<a href="https://www.npmjs.com/package/@xenova/transformers">
<img alt="NPM" src="https://img.shields.io/npm/v/@xenova/transformers">
<a href="https://www.npmjs.com/package/@huggingface/transformers">
<img alt="NPM" src="https://img.shields.io/npm/v/@huggingface/transformers">
</a>
<a href="https://www.npmjs.com/package/@xenova/transformers">
<img alt="NPM Downloads" src="https://img.shields.io/npm/dw/@xenova/transformers">
<a href="https://www.npmjs.com/package/@huggingface/transformers">
<img alt="NPM Downloads" src="https://img.shields.io/npm/dw/@huggingface/transformers">
</a>
<a href="https://www.jsdelivr.com/package/npm/@xenova/transformers">
<img alt="jsDelivr Hits" src="https://img.shields.io/jsdelivr/npm/hw/@xenova/transformers">
<a href="https://www.jsdelivr.com/package/npm/@huggingface/transformers">
<img alt="jsDelivr Hits" src="https://img.shields.io/jsdelivr/npm/hw/@huggingface/transformers">
</a>
<a href="https://github.com/xenova/transformers.js/blob/main/LICENSE">
<img alt="License" src="https://img.shields.io/github/license/xenova/transformers.js?color=blue">
2 changes: 1 addition & 1 deletion docs/snippets/1_quick-tour.snippet
@@ -23,7 +23,7 @@ out = pipe('I love transformers!')
<td>

```javascript
-import { pipeline } from '@xenova/transformers';
+import { pipeline } from '@huggingface/transformers';
// Allocate a pipeline for sentiment-analysis
let pipe = await pipeline('sentiment-analysis');
```
6 changes: 3 additions & 3 deletions docs/snippets/2_installation.snippet
@@ -1,12 +1,12 @@

-To install via [NPM](https://www.npmjs.com/package/@xenova/transformers), run:
+To install via [NPM](https://www.npmjs.com/package/@huggingface/transformers), run:
```bash
-npm i @xenova/transformers
+npm i @huggingface/transformers
```

Alternatively, you can use it in vanilla JS, without any bundler, by using a CDN or static hosting. For example, using [ES Modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules), you can import the library with:
```html
<script type="module">
-import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.0';
+import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0-alpha.3';
</script>
```
4 changes: 2 additions & 2 deletions docs/snippets/4_custom-usage.snippet
@@ -1,11 +1,11 @@


-By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.0/dist/), which should work out-of-the-box. You can customize this as follows:
+By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0-alpha.3/dist/), which should work out-of-the-box. You can customize this as follows:

### Settings

```javascript
-import { env } from '@xenova/transformers';
+import { env } from '@huggingface/transformers';
// Specify a custom location for models (defaults to '/models/').
env.localModelPath = '/path/to/models/';
```
1 change: 1 addition & 0 deletions docs/snippets/6_supported-models.snippet
@@ -38,6 +38,7 @@
1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
1. **Florence2** (from Microsoft) released with the paper [Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks](https://arxiv.org/abs/2311.06242) by Bin Xiao, Haiping Wu, Weijian Xu, Xiyang Dai, Houdong Hu, Yumao Lu, Michael Zeng, Ce Liu, Lu Yuan.
1. **[Gemma](https://huggingface.co/docs/transformers/main/model_doc/gemma)** (from Google) released with the paper [Gemma: Open Models Based on Gemini Technology and Research](https://blog.google/technology/developers/gemma-open-models/) by the Gemma Google team.
+1. **[Gemma2](https://huggingface.co/docs/transformers/main/model_doc/gemma2)** (from Google) released with the paper [Gemma2: Open Models Based on Gemini Technology and Research](https://blog.google/technology/developers/google-gemma-2/) by the Gemma Google team.
1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
6 changes: 3 additions & 3 deletions docs/source/guides/node-audio-processing.md
@@ -26,11 +26,11 @@ This tutorial will be written as an ES module, but you can easily adapt it to us

## Getting started

-Let's start by creating a new Node.js project and installing Transformers.js via [NPM](https://www.npmjs.com/package/@xenova/transformers):
+Let's start by creating a new Node.js project and installing Transformers.js via [NPM](https://www.npmjs.com/package/@huggingface/transformers):

```bash
npm init -y
-npm i @xenova/transformers
+npm i @huggingface/transformers
```

<Tip>
@@ -52,7 +52,7 @@ npm i wavefile
Start by creating a new file called `index.js`, which will be the entry point for our application. Let's also import the necessary modules:

```js
-import { pipeline } from '@xenova/transformers';
+import { pipeline } from '@huggingface/transformers';
import wavefile from 'wavefile';
```

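To see where these imports lead, here is a minimal sketch of the rest of the flow; the file name, checkpoint (`Xenova/whisper-tiny.en`), and mono-channel input are illustrative assumptions:

```js
import { pipeline } from '@huggingface/transformers';
import wavefile from 'wavefile';
import fs from 'fs';

// Read the audio file and convert it to the format the model expects:
// 32-bit float samples at a 16 kHz sampling rate.
const wav = new wavefile.WaveFile(fs.readFileSync('audio.wav'));
wav.toBitDepth('32f');
wav.toSampleRate(16000);
const audioData = wav.getSamples(); // assumes a mono file

// Transcribe with a Whisper checkpoint.
const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en');
const output = await transcriber(audioData);
console.log(output); // e.g. { text: ' ...' }
```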
2 changes: 1 addition & 1 deletion docs/source/guides/private.md
@@ -28,7 +28,7 @@ Transformers.js will attach an Authorization header to requests made to the Hugg
One way to do this is to call your program with the environment variable set. For example, let's say you have a file called `llama.js` with the following code:

```js
-import { AutoTokenizer } from '@xenova/transformers';
+import { AutoTokenizer } from '@huggingface/transformers';

// Load tokenizer for a gated repository.
const tokenizer = await AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf');
```
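As a hedged sketch of how this program might then be run and used (the token value is hypothetical, and `HF_TOKEN` is the environment variable this guide refers to):

```js
// Run with the access token visible to the process, e.g.:
//   HF_TOKEN=hf_xxxxxxxxxxxxx node llama.js
import { AutoTokenizer } from '@huggingface/transformers';

// Load tokenizer for a gated repository (succeeds once access is granted).
const tokenizer = await AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf');

// The tokenizer then works as usual:
const { input_ids } = tokenizer('Hello world!');
console.log(input_ids);
```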
2 changes: 1 addition & 1 deletion docs/source/pipelines.md
@@ -14,7 +14,7 @@ For the full list of available tasks/pipelines, check out [this table](#availabl
Start by creating an instance of `pipeline()` and specifying a task you want to use it for. For example, to create a sentiment analysis pipeline, you can do:

```javascript
-import { pipeline } from '@xenova/transformers';
+import { pipeline } from '@huggingface/transformers';

let classifier = await pipeline('sentiment-analysis');
```
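A short, hedged continuation of the snippet above; the exact score is illustrative:

```javascript
// Use the classifier created above:
let result = await classifier('I love transformers!');
// [{ label: 'POSITIVE', score: 0.9998 }]
```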
12 changes: 6 additions & 6 deletions docs/source/tutorials/next.md
@@ -42,11 +42,11 @@ On installation, you'll see various prompts. For this demo, we'll be selecting t

### Step 2: Install and configure Transformers.js

-You can install Transformers.js from [NPM](https://www.npmjs.com/package/@xenova/transformers) with the following command:
+You can install Transformers.js from [NPM](https://www.npmjs.com/package/@huggingface/transformers) with the following command:


```bash
-npm i @xenova/transformers
+npm i @huggingface/transformers
```

We also need to update the `next.config.js` file to ignore node-specific modules when bundling for the browser:
@@ -76,7 +76,7 @@ module.exports = nextConfig
Next, we'll create a new [Web Worker](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers) script where we'll place all ML-related code. This is to ensure that the main thread is not blocked while the model is loading and performing inference. For this application, we'll be using [`Xenova/distilbert-base-uncased-finetuned-sst-2-english`](https://huggingface.co/Xenova/distilbert-base-uncased-finetuned-sst-2-english), a ~67M parameter model finetuned on the [Stanford Sentiment Treebank](https://huggingface.co/datasets/sst) dataset. Add the following code to `./src/app/worker.js`:

```js
import { pipeline, env } from "@xenova/transformers";
import { pipeline, env } from "@huggingface/transformers";

// Skip local model check
env.allowLocalModels = false;
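
// The worker script is truncated in this diff. As a hedged sketch (not the
// file's verbatim contents), it might continue with a lazily-constructed
// singleton pipeline and a message handler that forwards progress updates
// back to the main thread:
class PipelineSingleton {
    static task = 'text-classification';
    static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english';
    static instance = null;

    static async getInstance(progress_callback = null) {
        if (this.instance === null) {
            this.instance = pipeline(this.task, this.model, { progress_callback });
        }
        return this.instance;
    }
}

self.addEventListener('message', async (event) => {
    // Retrieve the pipeline, reporting download/load progress to the UI.
    const classifier = await PipelineSingleton.getInstance(x => self.postMessage(x));

    const output = await classifier(event.data.text);
    self.postMessage({ status: 'complete', output });
});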
```

@@ -264,11 +264,11 @@ On installation, you'll see various prompts. For this demo, we'll be selecting t

### Step 2: Install and configure Transformers.js

-You can install Transformers.js from [NPM](https://www.npmjs.com/package/@xenova/transformers) with the following command:
+You can install Transformers.js from [NPM](https://www.npmjs.com/package/@huggingface/transformers) with the following command:


```bash
-npm i @xenova/transformers
+npm i @huggingface/transformers
```

We also need to update the `next.config.js` file to prevent Webpack from bundling certain packages:
@@ -294,7 +294,7 @@ Next, let's set up our Route Handler. We can do this by creating two files in a
1. `pipeline.js` - to handle the construction of our pipeline.
```js
import { pipeline } from "@xenova/transformers";
import { pipeline } from "@huggingface/transformers";
// Use the Singleton pattern to enable lazy construction of the pipeline.
// NOTE: We wrap the class in a function to prevent code duplication (see below).
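
// A hedged sketch of the singleton those comments describe; the dev-mode
// caching detail is an assumption based on the NOTE above:
const P = () => class PipelineSingleton {
    static task = 'text-classification';
    static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english';
    static instance = null;

    static async getInstance(progress_callback = null) {
        if (this.instance === null) {
            this.instance = pipeline(this.task, this.model, { progress_callback });
        }
        return this.instance;
    }
};

// In development, keep the singleton on `global` so it survives hot reloads.
const PipelineSingleton = process.env.NODE_ENV !== 'production'
    ? (global.PipelineSingleton ??= P())
    : P();
export default PipelineSingleton;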
```
10 changes: 5 additions & 5 deletions docs/source/tutorials/node.md
@@ -31,11 +31,11 @@ Although you can always use the [Python library](https://github.com/huggingface/

## Getting started

-Let's start by creating a new Node.js project and installing Transformers.js via [NPM](https://www.npmjs.com/package/@xenova/transformers):
+Let's start by creating a new Node.js project and installing Transformers.js via [NPM](https://www.npmjs.com/package/@huggingface/transformers):

```bash
npm init -y
-npm i @xenova/transformers
+npm i @huggingface/transformers
```

Next, create a new file called `app.js`, which will be the entry point for our application. Depending on whether you're using [ECMAScript modules](#ecmascript-modules-esm) or [CommonJS](#commonjs), you will need to do some things differently (see below).
@@ -66,7 +66,7 @@ import url from 'url';
Following that, let's import Transformers.js and define the `MyClassificationPipeline` class.

```javascript
-import { pipeline, env } from '@xenova/transformers';
+import { pipeline, env } from '@huggingface/transformers';

class MyClassificationPipeline {
static task = 'text-classification';
```

@@ -107,7 +107,7 @@ class MyClassificationPipeline {
static async getInstance(progress_callback = null) {
if (this.instance === null) {
// Dynamically import the Transformers.js library
-let { pipeline, env } = await import('@xenova/transformers');
+let { pipeline, env } = await import('@huggingface/transformers');

// NOTE: Uncomment this to change the cache directory
// env.cacheDir = './.cache';
@@ -195,7 +195,7 @@ Great! We've successfully created a basic HTTP server that uses Transformers.js

### Model caching

-By default, the first time you run the application, it will download the model files and cache them on your file system (in `./node_modules/@xenova/transformers/.cache/`). All subsequent requests will then use this model. You can change the location of the cache by setting `env.cacheDir`. For example, to cache the model in the `.cache` directory in the current working directory, you can add:
+By default, the first time you run the application, it will download the model files and cache them on your file system (in `./node_modules/@huggingface/transformers/.cache/`). All subsequent requests will then use this model. You can change the location of the cache by setting `env.cacheDir`. For example, to cache the model in the `.cache` directory in the current working directory, you can add:

```javascript
env.cacheDir = './.cache';
```
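For context, a hedged sketch of the request handler such a server might use. The route name and port are illustrative, and `MyClassificationPipeline` is assumed to be the class defined earlier in this tutorial:

```javascript
import http from 'http';
import querystring from 'querystring';
import url from 'url';

const server = http.createServer(async (req, res) => {
    // Extract the `text` query parameter from e.g. /classify?text=...
    const parsedUrl = url.parse(req.url);
    const { text } = querystring.parse(parsedUrl.query ?? '');

    res.setHeader('Content-Type', 'application/json');
    if (parsedUrl.pathname === '/classify' && text) {
        // Lazily construct (or reuse) the pipeline, then classify.
        const classifier = await MyClassificationPipeline.getInstance();
        const response = await classifier(text);
        res.end(JSON.stringify(response));
    } else {
        res.statusCode = 400;
        res.end(JSON.stringify({ error: 'Bad request' }));
    }
});

server.listen(3000, () => console.log('Server running on http://localhost:3000/'));
```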
6 changes: 3 additions & 3 deletions docs/source/tutorials/react.md
@@ -44,10 +44,10 @@ You can stop the development server by pressing <kbd>Ctrl</kbd> + <kbd>C</kbd> i

## Step 2: Install and configure Transformers.js

-Now we get to the fun part: adding machine learning to our application! First, install Transformers.js from [NPM](https://www.npmjs.com/package/@xenova/transformers) with the following command:
+Now we get to the fun part: adding machine learning to our application! First, install Transformers.js from [NPM](https://www.npmjs.com/package/@huggingface/transformers) with the following command:

```bash
-npm install @xenova/transformers
+npm install @huggingface/transformers
```

For this application, we will use the [Xenova/nllb-200-distilled-600M](https://huggingface.co/Xenova/nllb-200-distilled-600M) model, which can perform multilingual translation among 200 languages. Before we start, there are 2 things we need to take note of:
@@ -58,7 +58,7 @@ We can achieve both of these goals by using a [Web Worker](https://developer.moz

1. Create a file called `worker.js` in the `src` directory. This script will do all the heavy-lifting for us, including loading and running the translation pipeline. To ensure the model is only loaded once, we will create the `MyTranslationPipeline` class, which uses the [singleton pattern](https://en.wikipedia.org/wiki/Singleton_pattern) to lazily create a single instance of the pipeline when `getInstance` is first called, and reuses this pipeline for all subsequent calls:
```javascript
-import { pipeline } from '@xenova/transformers';
+import { pipeline } from '@huggingface/transformers';

class MyTranslationPipeline {
static task = 'translation';
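    // Truncated in this diff; a hedged sketch of how the class might
    // continue, using the Xenova/nllb-200-distilled-600M checkpoint
    // named above:
    static model = 'Xenova/nllb-200-distilled-600M';
    static instance = null;

    static async getInstance(progress_callback = null) {
        if (this.instance === null) {
            this.instance = pipeline(this.task, this.model, { progress_callback });
        }
        return this.instance;
    }
}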
```