Error when loading the model on an M1 Mac

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Loading via the MPS backend:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("/本地路径示例/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("/本地路径示例/chatglm-6b", trust_remote_code=True).half().to('mps')

It prints: UserWarning: MPS: no support for int64 min/max ops, casting it to int32
and then the process just hangs with no further output.
[screenshot of the stalled output]

MPS support is installed.
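As a sanity check (a minimal sketch, not specific to this repo), you can confirm from Python that the installed PyTorch build actually has a usable MPS backend:

import torch

# is_built(): the installed PyTorch was compiled with MPS support
# is_available(): this machine / OS version can actually use the MPS device
print(torch.backends.mps.is_built())
print(torch.backends.mps.is_available())

Both should print True on an Apple-silicon build of PyTorch; if either prints False, the hang is more likely an install or environment problem than a model problem.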

Expected Behavior

No response

Steps To Reproduce

111

Environment

- OS: macOS Ventura
- Python: 3.10
- Transformers:
- PyTorch:
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`): False

Anything else?

I have only tested this on an M2 chip. From the screenshot it looks like you are running api.py; try running cli_demo.py first. If that still does not work, try upgrading pytorch-nightly to the latest version.
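If cli_demo.py also hangs, a minimal standalone test can help tell whether the hang happens during loading or during generation. This is only a sketch: it reuses the placeholder local path from the report, and model.chat is the interface used by the repo's demo scripts.

from transformers import AutoTokenizer, AutoModel

model_path = "/本地路径示例/chatglm-6b"  # placeholder local path from the report
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).half().to('mps')
model = model.eval()
print("model loaded")  # if this prints, loading itself is not the problem

# one short generation round via the repo's chat interface
response, history = model.chat(tokenizer, "你好", history=[])
print(response)

If "model loaded" is printed but the chat call never returns, the issue is in generation on the MPS device rather than in api.py itself.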