

Ziya-LLaMA-13B-v1.1

(Due to the license restrictions on the LLaMA weights, we cannot release the complete model weights directly; users need to follow the usage instructions to merge them.)
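In other words, what is distributed are delta weights. The following is a minimal, illustrative sketch of the merge, assuming the deltas are added element-wise onto the original LLaMA-13B weights; the paths are placeholders, and the official merge script referenced in the Ziya-LLaMA-13B-v1 usage instructions should be preferred, since details such as the extended Chinese vocabulary are only approximated here.

import torch
from transformers import LlamaForCausalLM

# Placeholders: the original LLaMA-13B weights and the released Ziya delta weights.
base = LlamaForCausalLM.from_pretrained("path/to/llama-13b", torch_dtype=torch.float16)
ziya = LlamaForCausalLM.from_pretrained("IDEA-CCNL/Ziya-LLaMA-13B-v1.1", torch_dtype=torch.float16)

base_state = base.state_dict()
for name, param in ziya.state_dict().items():
    base_param = base_state[name]
    if param.shape == base_param.shape:
        # Recover the full weight by adding the delta to the original tensor.
        param.data += base_param
    else:
        # The Ziya checkpoints extend LLaMA's vocabulary, so the embedding and
        # lm_head matrices are larger; offsetting only the overlapping rows is
        # an assumption here -- defer to the official script for the exact handling.
        param.data[: base_param.shape[0]] += base_param

ziya.save_pretrained("path/to/Ziya-LLaMA-13B-v1.1-merged")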

Ziya Series Models

Brief Introduction

We have further optimized the Ziya-LLaMA-13B-v1 model and released the open-source version Ziya-LLaMA-13B-v1.1. By adjusting the proportion of fine-tuning data and adopting a better reinforcement learning strategy, this version achieves clear improvements in question-answering accuracy, mathematical ability, and safety; a detailed capability analysis is shown in the figure below.

Software Dependencies

pip install torch==1.12.1 tokenizers==0.13.3 git+https://github.com/huggingface/transformers

Usage

Please refer to the usage instructions for Ziya-LLaMA-13B-v1.
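For convenience, here is a minimal inference sketch following the conventions of the v1 usage instructions: queries are wrapped in a <human>: / <bot>: dialogue template and the merged checkpoint is run with Hugging Face transformers. The checkpoint path is a placeholder and the sampling parameters are illustrative.

import torch
from transformers import AutoTokenizer, LlamaForCausalLM

# Placeholder path to the locally merged Ziya-LLaMA-13B-v1.1 checkpoint.
ckpt = "path/to/Ziya-LLaMA-13B-v1.1-merged"
model = LlamaForCausalLM.from_pretrained(ckpt, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(ckpt, use_fast=False)

# The v1 usage wraps each query in a <human>: ... <bot>: dialogue template.
query = "帮我写一份去北京的旅游计划"  # "Help me write a travel plan for Beijing."
prompt = "<human>:" + query.strip() + "\n<bot>:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

# Sampling parameters below are illustrative, not the only supported settings.
generate_ids = model.generate(
    input_ids,
    max_new_tokens=1024,
    do_sample=True,
    top_p=0.85,
    temperature=1.0,
    repetition_penalty=1.0,
    eos_token_id=2,
    bos_token_id=1,
    pad_token_id=0,
)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0])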

Citation

If you use our model in your work, please cite our paper:

@article{fengshenbang,
  author    = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
  title     = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
  journal   = {CoRR},
  volume    = {abs/2209.02970},
  year      = {2022}
}

You can also cite our website:

@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}