I’m encountering a TypeScript error while trying to initialize a LLaMA model with quantization enabled using Transformers.js. The compiler is throwing an error indicating that the ‘quantized’ property isn’t recognized in the Pipeline configuration.
How can I properly configure quantization for the Pipeline while satisfying TypeScript’s type checking? Is there an alternative way to enable quantization for the LLaMA model in Transformers.js?
I'd appreciate any guidance on resolving this typing issue while keeping quantization enabled.
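Regarding the alternative approach: the examples I have seen pass 'quantized' through the pipeline() factory function's options rather than the Pipeline constructor. A minimal sketch of that pattern (I have not verified it is equivalent to constructing a Pipeline directly, and the prompt and generation parameters are just placeholders):

```typescript
import { pipeline } from '@xenova/transformers';

// Sketch: quantization passed as a pretrained option to the pipeline()
// factory, not as a property of the Pipeline constructor's config object.
const generator = await pipeline('text-generation', 'Xenova/LLaMA-70b', {
  quantized: true,
});

// Placeholder prompt and generation settings, just to exercise the pipeline.
const output = await generator('Hello, my name is', { max_new_tokens: 32 });
console.log(output);
```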
The exact compiler error:

```
Object literal may only specify known properties, and 'quantized' does not exist in type '{
    task?: string | undefined;
    model?: PreTrainedModel | undefined;
    tokenizer?: PreTrainedTokenizer | undefined;
    processor?: Processor | undefined;
}'
```
Environment Details
- Framework: Transformers.js v2.17.2
- Model: Xenova/LLaMA-70b
- TypeScript Version: v5.7.2
- Node.js Version: v22.12.0
I have tried explicitly declaring the 'quantized' property as true:
```typescript
import { AutoModelForCausalLM, Pipeline } from '@xenova/transformers';

class ModelWorker {
  private model: Pipeline | null = null;

  async initializeModel() {
    try {
      const model = await AutoModelForCausalLM.from_pretrained('Xenova/LLaMA-70b');
      this.model = new Pipeline({
        task: 'text-generation',
        model: model,
        quantized: true // TS2353: 'quantized' is not a known property here
      });
    } catch (error) {
      console.error('Error initializing model:', error);
      throw error;
    }
  }
}
```
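I also considered setting the flag on from_pretrained itself, since its options type appears to accept it. A sketch of that variation (I have not confirmed it actually quantizes this model, and loading the tokenizer separately is my own addition):

```typescript
import { AutoModelForCausalLM, AutoTokenizer } from '@xenova/transformers';

// Sketch: quantized as a from_pretrained option. The tokenizer is loaded
// separately so both could later be handed to a pipeline if needed.
const model = await AutoModelForCausalLM.from_pretrained('Xenova/LLaMA-70b', {
  quantized: true,
});
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/LLaMA-70b');
```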