
RuntimeError: Error(s) in loading state_dict for RobertaForSequenceClassification #28

Open
greenunknown opened this issue Feb 18, 2021 · 10 comments

Comments

@greenunknown

I get the following error after trying to run

pip install -r requirements.txt
python -m detector.server detector-base.pt

Error:

RuntimeError: Error(s) in loading state_dict for RobertaForSequenceClassification:
        Missing key(s) in state_dict: "roberta.embeddings.position_ids".
        Unexpected key(s) in state_dict: "roberta.pooler.dense.weight", "roberta.pooler.dense.bias"

I'm not sure if there was a change between the Google version of the RoBERTa weights and the Azure version of the weights.

Thanks for the help!

@jongwook
Collaborator

Hi, it's our bad that we didn't properly specify the dependency versions. Could you try with transformers==2.9.1 and see if that loads properly?
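
If downgrading isn't convenient, a non-strict load on a newer transformers should also work around the two mismatches in the traceback. This is an untested sketch; the checkpoint path and the "model_state_dict" key are assumptions about how the checkpoint is laid out, not the repo's own loading code:

import torch
from transformers import RobertaForSequenceClassification

# Assumed checkpoint layout: weights stored under "model_state_dict".
data = torch.load("detector-base.pt", map_location="cpu")
state_dict = data["model_state_dict"]

# Newer RobertaForSequenceClassification builds RoBERTa without a pooler, and
# roberta.embeddings.position_ids is a buffer that gets recreated, so both
# mismatches from the traceback are safe to drop/ignore here.
state_dict.pop("roberta.pooler.dense.weight", None)
state_dict.pop("roberta.pooler.dense.bias", None)

model = RobertaForSequenceClassification.from_pretrained("roberta-base")
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing:", missing, "unexpected:", unexpected)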

@greenunknown
Author

Thank you @jongwook! Changing transformers>=2.0.0 to transformers==2.9.1 in requirements.txt resolved the issue.
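
For anyone else hitting this, the change in requirements.txt is just the one version pin (the other entries stay as they are):

- transformers>=2.0.0
+ transformers==2.9.1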

@BelowzeroA

Following your advice, I installed transformers==2.9.1, but then the next issue popped up:

from transformers import RobertaForSequenceClassification, RobertaTokenizer
ImportError: cannot import name 'RobertaForSequenceClassification' from 'transformers'
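
That import should be available in transformers 2.9.1, so if it fails it is worth confirming which transformers installation is actually being picked up, e.g.:

python -c "import transformers; print(transformers.__version__, transformers.__file__)"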

@crazoter

crazoter commented May 24, 2021

@BelowzeroA Did you manage to resolve this? The fix worked for me on Google Colab, but I encountered your issue when I tried the same fix in a fresh Anaconda environment on a Windows 10 device.

Edit: scratch that, I had a typo in the component name. Fixing that resolved my problem.

@Yorko

Yorko commented May 18, 2022

Made it work with Python 3.8, transformers 2.9.1, and tokenizers 0.7.0.

FYI, I'm using Poetry and had to run the following:

  • poetry env use <PATH_TO_python3.8> (the path can be found by running which python3.8)
  • poetry install
  • poetry run python -m detector.server <PATH_TO_CKPT_detector-base.pt>

My pyproject.toml file is the following:

[tool.poetry]
name = "gpt-2-output-dataset"
version = "0.1.0"
description = "GPT-2 output detector"
authors = ["Your Name <[email protected]>"]

[tool.poetry.dependencies]
python = "^3.8.0"
transformers = "2.9.1"
fire = "^0.2.1"
requests = "^2.22.0"
tqdm = "^4.32.2"
torch = "^1.2.0"
tokenizers = "^0.7.0"
tensorboard = "^1.14.0"

[tool.poetry.dev-dependencies]

[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"

@ilovefreesw

Facing the same thing today.

@casheo

casheo commented Dec 20, 2022

Same thing today, with transformers 2.9.1.

@NeyokiCat

Python 3.10.10; transformers==4.26.1; tokenizers==0.13.2
Same with me.

@niranjanakella

I am also facing the same issue. Can someone please address this?

@AshfakYeafi

Installing transformers==4.24.0 and tokenizers==0.13.2 solves the issue.
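
i.e. something along the lines of:

pip install transformers==4.24.0 tokenizers==0.13.2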
