Hugging Face RoBERTa examples. Visit the dedicated documentation page for a deeper view of what Model Cards on the Hub are and how they work under the hood.

Twitter-roBERTa-base for Sentiment Analysis is a roBERTa-base model trained on ~58M tweets and fine-tuned for sentiment analysis with the TweetEval benchmark. Its label mapping is 0 -> Negative, 1 -> Neutral, 2 -> Positive, and the accompanying repository hosts sample code for using the fine-tuned checkpoint to perform sentiment analysis on text data. Note that RoBERTa is case-sensitive: it makes a difference between "english" and "English".

A RoBERTa model can also be fine-tuned for topic classification using the Hugging Face Transformers and Datasets libraries; a common question when doing so is why a fine-tuned RoBERTa text classifier ends up predicting only one category. Another variant is a Burmese natural language inference (NLI) model fine-tuned from xlm-roberta-base, trained on a curated Burmese NLI dataset combining cleaned native data, manual annotations, and translated English NLI samples.

To judge whether a model from the Hub can run on a local machine, check the parameter count on its model card. For example, deepset/roberta-base-squad2 is a 124M-parameter model, which fits comfortably on a GPU with 24GB of VRAM.
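The parameter count on a model card translates directly into a rough VRAM estimate: weights times bytes per parameter, plus some activation/runtime overhead. A minimal sketch of that arithmetic (the 1.2x overhead factor here is an illustrative assumption, not a published figure):

```python
def estimate_model_memory_gb(num_params: float,
                             bytes_per_param: int = 4,
                             overhead: float = 1.2) -> float:
    """Rough inference-time VRAM estimate: weights * dtype size * overhead.

    bytes_per_param: 4 for fp32, 2 for fp16/bf16.
    overhead: assumed multiplier for activations and runtime buffers.
    """
    return num_params * bytes_per_param * overhead / 1024 ** 3


# deepset/roberta-base-squad2 has ~124M parameters (per its model card)
fp32_gb = estimate_model_memory_gb(124e6, bytes_per_param=4)
fp16_gb = estimate_model_memory_gb(124e6, bytes_per_param=2)
print(f"fp32: ~{fp32_gb:.2f} GB, fp16: ~{fp16_gb:.2f} GB")
```

Both estimates come out well under 1 GB, which is why a 124M-parameter model is no problem for a 24GB card; the same arithmetic shows why multi-billion-parameter models are not.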
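The label mapping from the sentiment model card (0 -> Negative, 1 -> Neutral, 2 -> Positive) is applied after the classifier head: take a softmax over the three logits and look up the argmax. A dependency-free sketch of that decoding step, with hypothetical logits standing in for real model output:

```python
import math

# Label mapping from the model card: 0 -> Negative, 1 -> Neutral, 2 -> Positive
ID2LABEL = {0: "Negative", 1: "Neutral", 2: "Positive"}


def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def decode(logits):
    """Turn raw 3-way classifier logits into (label, probability)."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return ID2LABEL[best], probs[best]


# Hypothetical logits for illustration (not real model output)
label, prob = decode([-1.3, 0.2, 2.9])
print(label, round(prob, 3))
```

In practice the Transformers `pipeline` does this decoding for you; the sketch just makes explicit what the integer-to-label mapping on the card refers to.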