Standardize HF_ACCESS_TOKEN -> HF_TOKEN #391

Merged 2 commits on Dec 5, 2023
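For consumers of these packages, the change is a pure rename: wherever an access token was read from `HF_ACCESS_TOKEN`, it is now read from `HF_TOKEN`. A minimal migration sketch (the helper and its fallback behavior are illustrative, not part of this PR):

```typescript
// Prefer the new HF_TOKEN name, falling back to the legacy
// HF_ACCESS_TOKEN so older environments keep working (illustrative only).
function resolveHfToken(env: Record<string, string | undefined>): string | undefined {
  return env.HF_TOKEN ?? env.HF_ACCESS_TOKEN;
}

// e.g. const inference = new HfInference(resolveHfToken(process.env));
```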
6 changes: 3 additions & 3 deletions .github/workflows/lint-and-test.yml

```diff
@@ -50,12 +50,12 @@ jobs:
       - name: Test
         run: VCR_MODE=playback pnpm --filter ...[${{ steps.since.outputs.SINCE }}] test
         env:
-          HF_ACCESS_TOKEN: ${{ secrets.HF_ACCESS_TOKEN }}
+          HF_TOKEN: ${{ secrets.HF_TOKEN }}
 
       - name: Test in browser
         run: VCR_MODE=playback pnpm --filter ...[${{ steps.since.outputs.SINCE }}] test:browser
         env:
-          HF_ACCESS_TOKEN: ${{ secrets.HF_ACCESS_TOKEN }}
+          HF_TOKEN: ${{ secrets.HF_TOKEN }}
 
       - name: E2E - start mock npm registry
         run: |
@@ -86,7 +86,7 @@ jobs:
           pnpm i --ignore-workspace --registry http://localhost:4874/
           pnpm start
         env:
-          token: ${{ secrets.HF_ACCESS_TOKEN }}
+          token: ${{ secrets.HF_TOKEN }}
 
       - name: E2E test - svelte app build
         working-directory: e2e/svelte
```
18 changes: 9 additions & 9 deletions README.md

````diff
@@ -105,9 +105,9 @@ Get your HF access token in your [account settings](https://huggingface.co/setti
 ```ts
 import { HfInference } from "@huggingface/inference";
 
-const HF_ACCESS_TOKEN = "hf_...";
+const HF_TOKEN = "hf_...";
 
-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 
 // You can also omit "model" to use the recommended model for the task
 await inference.translation({
@@ -137,11 +137,11 @@ const { generated_text } = await gpt2.textGeneration({inputs: 'The answer to the
 ```ts
 import {HfAgent, LLMFromHub, defaultTools} from '@huggingface/agents';
 
-const HF_ACCESS_TOKEN = "hf_...";
+const HF_TOKEN = "hf_...";
 
 const agent = new HfAgent(
-  HF_ACCESS_TOKEN,
-  LLMFromHub(HF_ACCESS_TOKEN),
+  HF_TOKEN,
+  LLMFromHub(HF_TOKEN),
   [...defaultTools]
 );
 
@@ -162,16 +162,16 @@ console.log(messages);
 ```ts
 import { createRepo, uploadFile, deleteFiles } from "@huggingface/hub";
 
-const HF_ACCESS_TOKEN = "hf_...";
+const HF_TOKEN = "hf_...";
 
 await createRepo({
   repo: "my-user/nlp-model", // or {type: "model", name: "my-user/nlp-test"},
-  credentials: {accessToken: HF_ACCESS_TOKEN}
+  credentials: {accessToken: HF_TOKEN}
 });
 
 await uploadFile({
   repo: "my-user/nlp-model",
-  credentials: {accessToken: HF_ACCESS_TOKEN},
+  credentials: {accessToken: HF_TOKEN},
   // Can work with native File in browsers
   file: {
     path: "pytorch_model.bin",
@@ -181,7 +181,7 @@ await uploadFile({
 
 await deleteFiles({
   repo: {type: "space", name: "my-user/my-space"}, // or "spaces/my-user/my-space"
-  credentials: {accessToken: HF_ACCESS_TOKEN},
+  credentials: {accessToken: HF_TOKEN},
   paths: ["README.md", ".gitattributes"]
 });
 ```
````
2 changes: 1 addition & 1 deletion packages/agents/README.md

````diff
@@ -131,7 +131,7 @@ const uppercaseTool: Tool = {
 };
 
 // pass it in the agent
-const agent = new HfAgent(process.env.HF_ACCESS_TOKEN,
+const agent = new HfAgent(process.env.HF_TOKEN,
   LLMFromHub("hf_...", "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5"),
   [uppercaseTool, ...defaultTools]);
 ```
````
18 changes: 9 additions & 9 deletions packages/agents/test/HfAgent.spec.ts

```diff
@@ -4,20 +4,20 @@ import type { Data } from "../src/types";
 import type { HfInference } from "@huggingface/inference";
 
 const env = import.meta.env;
-if (!env.HF_ACCESS_TOKEN) {
-	console.warn("Set HF_ACCESS_TOKEN in the env to run the tests for better rate limits");
+if (!env.HF_TOKEN) {
+	console.warn("Set HF_TOKEN in the env to run the tests for better rate limits");
 }
 
 describe("HfAgent", () => {
 	it("You can create an agent from the hub", async () => {
-		const llm = LLMFromHub(env.HF_ACCESS_TOKEN, "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5");
-		const agent = new HfAgent(env.HF_ACCESS_TOKEN, llm);
+		const llm = LLMFromHub(env.HF_TOKEN, "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5");
+		const agent = new HfAgent(env.HF_TOKEN, llm);
 		expect(agent).toBeDefined();
 	});
 
 	it("You can create an agent from an endpoint", async () => {
-		const llm = LLMFromEndpoint(env.HF_ACCESS_TOKEN ?? "", "endpoint");
-		const agent = new HfAgent(env.HF_ACCESS_TOKEN, llm);
+		const llm = LLMFromEndpoint(env.HF_TOKEN ?? "", "endpoint");
+		const agent = new HfAgent(env.HF_TOKEN, llm);
 		expect(agent).toBeDefined();
 	});
 
@@ -42,7 +42,7 @@ describe("HfAgent", () => {
 		},
 	};
 
-	const agent = new HfAgent(env.HF_ACCESS_TOKEN, undefined, [uppercaseTool, ...defaultTools]);
+	const agent = new HfAgent(env.HF_TOKEN, undefined, [uppercaseTool, ...defaultTools]);
 	const code = `
 async function generate() {
   const output = uppercase("hello friends");
@@ -61,7 +61,7 @@ async function generate() {
   message(output);
 }`;
 
-	const agent = new HfAgent(env.HF_ACCESS_TOKEN);
+	const agent = new HfAgent(env.HF_TOKEN);
 
 	await agent.evaluateCode(code).then((output) => {
 		expect(output.length).toBeGreaterThan(0);
@@ -75,7 +75,7 @@ async function generate() {
   toolThatDoesntExist(aaa);
 }`;
 
-	const hf = new HfAgent(env.HF_ACCESS_TOKEN);
+	const hf = new HfAgent(env.HF_TOKEN);
 
 	await hf.evaluateCode(code).then((output) => {
 		expect(output.length).toBeGreaterThan(0);
```
2 changes: 1 addition & 1 deletion packages/inference/README.md

````diff
@@ -504,7 +504,7 @@ const { generated_text } = await gpt2.textGeneration({inputs: 'The answer to the
 ## Running tests
 
 ```console
-HF_ACCESS_TOKEN="your access token" pnpm run test
+HF_TOKEN="your access token" pnpm run test
 ```
 
 ## Finding appropriate models
````
6 changes: 3 additions & 3 deletions packages/inference/test/HfInference.spec.ts

```diff
@@ -8,15 +8,15 @@ import { readTestFile } from "./test-files";
 const TIMEOUT = 60000 * 3;
 const env = import.meta.env;
 
-if (!env.HF_ACCESS_TOKEN) {
-	console.warn("Set HF_ACCESS_TOKEN in the env to run the tests for better rate limits");
+if (!env.HF_TOKEN) {
+	console.warn("Set HF_TOKEN in the env to run the tests for better rate limits");
 }
 
 describe.concurrent(
 	"HfInference",
 	() => {
 		// Individual tests can be ran without providing an api key, however running all tests without an api key will result in rate limiting error.
-		const hf = new HfInference(env.HF_ACCESS_TOKEN);
+		const hf = new HfInference(env.HF_TOKEN);
 
 		it("throws error if model does not exist", () => {
 			expect(
```
2 changes: 1 addition & 1 deletion packages/inference/test/vcr.ts

```diff
@@ -27,7 +27,7 @@ if (env.VCR_MODE) {
 
 	VCR_MODE = env.VCR_MODE as MODE;
 } else {
-	VCR_MODE = env.HF_ACCESS_TOKEN ? MODE.DISABLED : MODE.PLAYBACK;
+	VCR_MODE = env.HF_TOKEN ? MODE.DISABLED : MODE.PLAYBACK;
 }
 
 const originalFetch = globalThis.fetch;
```
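The `vcr.ts` hunk above sits in the test recorder's mode selection: an explicit `VCR_MODE` env var wins; otherwise a present `HF_TOKEN` disables the recorder (live API calls) while its absence forces playback of recorded fixtures. A sketch of that selection, assuming string-valued modes (the actual `MODE` enum in `vcr.ts` may differ):

```typescript
// Mirrors the selection in vcr.ts: an explicit VCR_MODE wins; otherwise
// the presence of HF_TOKEN decides between live calls ("disabled" VCR)
// and playback of recorded fixtures. Mode strings are assumed values.
function selectVcrMode(env: { VCR_MODE?: string; HF_TOKEN?: string }): string {
  if (env.VCR_MODE) return env.VCR_MODE;
  return env.HF_TOKEN ? "disabled" : "playback";
}
```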
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/audio-classification/about.md

````diff
@@ -53,7 +53,7 @@ You can use [huggingface.js](https://github.com/huggingface/huggingface.js) to i
 ```javascript
 import { HfInference } from "@huggingface/inference";
 
-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.audioClassification({
   data: await (await fetch("sample.flac")).blob(),
   model: "facebook/mms-lid-126",
````
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/audio-to-audio/about.md

````diff
@@ -35,7 +35,7 @@ You can use [huggingface.js](https://github.com/huggingface/huggingface.js) to i
 ```javascript
 import { HfInference } from "@huggingface/inference";
 
-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.audioToAudio({
   data: await (await fetch("sample.flac")).blob(),
   model: "speechbrain/sepformer-wham",
````
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/automatic-speech-recognition/about.md

````diff
@@ -54,7 +54,7 @@ You can use [huggingface.js](https://github.com/huggingface/huggingface.js) to t
 ```javascript
 import { HfInference } from "@huggingface/inference";
 
-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.automaticSpeechRecognition({
   data: await (await fetch("sample.flac")).blob(),
   model: "openai/whisper-large-v2",
````
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/conversational/about.md

````diff
@@ -34,7 +34,7 @@ You can use [huggingface.js](https://github.com/huggingface/huggingface.js) to i
 ```javascript
 import { HfInference } from "@huggingface/inference";
 
-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.conversational({
   model: "facebook/blenderbot-400M-distill",
   inputs: "Going to the movies tonight - any suggestions?",
````
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/image-classification/about.md

````diff
@@ -29,7 +29,7 @@ You can use [huggingface.js](https://github.com/huggingface/huggingface.js) to c
 ```javascript
 import { HfInference } from "@huggingface/inference";
 
-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.imageClassification({
   data: await (await fetch("https://picsum.photos/300/300")).blob(),
   model: "microsoft/resnet-50",
````
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/image-segmentation/about.md

````diff
@@ -45,7 +45,7 @@ You can use [huggingface.js](https://github.com/huggingface/huggingface.js) to i
 ```javascript
 import { HfInference } from "@huggingface/inference";
 
-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.imageSegmentation({
   data: await (await fetch("https://picsum.photos/300/300")).blob(),
   model: "facebook/detr-resnet-50-panoptic",
````
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/image-to-image/about.md

````diff
@@ -43,7 +43,7 @@ You can use [huggingface.js](https://github.com/huggingface/huggingface.js) to i
 ```javascript
 import { HfInference } from "@huggingface/inference";
 
-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.imageToImage({
   data: await (await fetch("image")).blob(),
   model: "timbrooks/instruct-pix2pix",
````
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/image-to-text/about.md

````diff
@@ -48,7 +48,7 @@ You can use [huggingface.js](https://github.com/huggingface/huggingface.js) to i
 ```javascript
 import { HfInference } from "@huggingface/inference";
 
-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.imageToText({
   data: await (await fetch("https://picsum.photos/300/300")).blob(),
   model: "Salesforce/blip-image-captioning-base",
````
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/summarization/about.md

````diff
@@ -25,7 +25,7 @@ You can use [huggingface.js](https://github.com/huggingface/huggingface.js) to i
 ```javascript
 import { HfInference } from "@huggingface/inference";
 
-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 const inputs =
   "Paris is the capital and most populous city of France, with an estimated population of 2,175,601 residents as of 2018, in an area of more than 105 square kilometres (41 square miles). The City of Paris is the centre and seat of government of the region and province of Île-de-France, or Paris Region, which has an estimated population of 12,174,880, or about 18 percent of the population of France as of 2017.";
 
````
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/text-classification/about.md

````diff
@@ -117,7 +117,7 @@ You can use [huggingface.js](https://github.com/huggingface/huggingface.js) to i
 ```javascript
 import { HfInference } from "@huggingface/inference";
 
-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.conversational({
   model: "distilbert-base-uncased-finetuned-sst-2-english",
   inputs: "I love this movie!",
````
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/text-generation/about.md

````diff
@@ -72,7 +72,7 @@ You can use [huggingface.js](https://github.com/huggingface/huggingface.js) to i
 ```javascript
 import { HfInference } from "@huggingface/inference";
 
-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.conversational({
   model: "distilbert-base-uncased-finetuned-sst-2-english",
   inputs: "I love this movie!",
````
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/text-to-image/about.md

````diff
@@ -41,7 +41,7 @@ You can use [huggingface.js](https://github.com/huggingface/huggingface.js) to i
 ```javascript
 import { HfInference } from "@huggingface/inference";
 
-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.textToImage({
   model: "stabilityai/stable-diffusion-2",
   inputs: "award winning high resolution photo of a giant tortoise/((ladybird)) hybrid, [trending on artstation]",
````
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/text-to-speech/about.md

````diff
@@ -47,7 +47,7 @@ You can use [huggingface.js](https://github.com/huggingface/huggingface.js) to i
 ```javascript
 import { HfInference } from "@huggingface/inference";
 
-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.textToSpeech({
   model: "facebook/mms-tts",
   inputs: "text to generate speech from",
````
2 changes: 1 addition & 1 deletion packages/tasks/src/tasks/translation/about.md

````diff
@@ -37,7 +37,7 @@ You can use [huggingface.js](https://github.com/huggingface/huggingface.js) to i
 ```javascript
 import { HfInference } from "@huggingface/inference";
 
-const inference = new HfInference(HF_ACCESS_TOKEN);
+const inference = new HfInference(HF_TOKEN);
 await inference.translation({
   model: "t5-base",
   inputs: "My name is Wolfgang and I live in Berlin",
````