
Llama-2-13b-ms
Llama 2 is a family of large language models (LLMs) developed and publicly released by Meta. The family comes in a range of parameter sizes (7B, 13B, and 70B) with both pretrained and fine-tuned variants. This is the 13B pretrained version, adapted to the ModelScope ecosystem and loadable through the ModelScope library.
  • Model Information
  • Model Details

Llama 2

Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, adapted to the ModelScope ecosystem and loadable through the ModelScope library.

Model Details

Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the website and accept our License before requesting access here.

Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.

Model Developers Meta

Variations Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.

Input Models input text only.

Output Models generate text only.

Model Architecture Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.

| Model | Training Data | Params | Content Length | GQA | Tokens | LR |
|---|---|---|---|---|---|---|
| Llama 2 | A new mix of publicly available online data | 7B | 4k | ✗ | 2.0T | 3.0 × 10⁻⁴ |
| Llama 2 | A new mix of publicly available online data | 13B | 4k | ✗ | 2.0T | 3.0 × 10⁻⁴ |
| Llama 2 | A new mix of publicly available online data | 70B | 4k | ✓ | 2.0T | 1.5 × 10⁻⁴ |

Llama 2 family of models. Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The larger 70B model uses Grouped-Query Attention (GQA) for improved inference scalability.
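
A minimal sketch of the idea behind GQA, assuming made-up shapes and head counts rather than Llama 2's real configuration: several query heads share a single key/value head, so the KV cache that must be kept around during inference is several times smaller.

import torch

# Illustrative grouped-query attention; the sizes below are hypothetical examples.
batch, seq_len, head_dim = 2, 16, 64
n_query_heads, n_kv_heads = 8, 2                 # every 4 query heads share one KV head
group_size = n_query_heads // n_kv_heads

q = torch.randn(batch, n_query_heads, seq_len, head_dim)
k = torch.randn(batch, n_kv_heads, seq_len, head_dim)   # KV cache is 4x smaller than with 8 KV heads
v = torch.randn(batch, n_kv_heads, seq_len, head_dim)

# Broadcast each KV head to the query heads in its group, then run standard attention.
k = k.repeat_interleave(group_size, dim=1)
v = v.repeat_interleave(group_size, dim=1)
attn = torch.softmax(q @ k.transpose(-2, -1) / head_dim ** 0.5, dim=-1)
out = attn @ v                                    # (batch, n_query_heads, seq_len, head_dim)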

Model Dates Llama 2 was trained between January 2023 and July 2023.

Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

License A custom commercial license is available at: https://ai.meta.com/resources/models-and-libraries/llama-downloads/

Example Code

Inference Code

import torch
from modelscope import Model, AutoTokenizer


# Load the 13B pretrained weights and the matching tokenizer from ModelScope
model = Model.from_pretrained("modelscope/Llama-2-13b-ms", revision='v1.0.2', device_map='auto', torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("modelscope/Llama-2-13b-ms", revision='v1.0.2')

prompt = "Hey, are you conscious? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate
generate_ids = model.generate(inputs.input_ids.to(model.device), max_length=30)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])

SFT

Code: https://github.com/modelscope/swift/tree/main/examples/pytorch/llm

  1. Supported SFT methods: lora, qlora, full-parameter fine-tuning, …
  2. Supported models: llama2-7b, llama2-13b, llama2-70b, …
  3. Supported features: model quantization, DDP, model parallelism (device_map), gradient checkpointing, gradient accumulation, pushing to the ModelScope Hub, custom datasets, notebook compatibility, …

Script for fine-tuning llama2-13b with qlora (requires about 11 GB of GPU memory):

git clone https://github.com/modelscope/swift.git
cd swift/examples/pytorch/llm

CUDA_VISIBLE_DEVICES=0 \
python llm_sft.py \
    --model_type llama2-13b \
    --sft_type lora \
    --output_dir runs \
    --dataset alpaca-en,alpaca-zh \
    --dataset_sample 20000 \
    --max_length 1024 \
    --quantization_bit 4 \
    --lora_rank 8 \
    --lora_alpha 32 \
    --lora_dropout_p 0.1 \
    --batch_size 1 \
    --learning_rate 1e-4 \
    --gradient_accumulation_steps 16 \
    --eval_steps 50 \
    --save_steps 50 \
    --save_total_limit 2 \
    --logging_steps 10

Intended Use

Intended Use Cases Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the INST and <<SYS>> tags, BOS and EOS tokens, and the whitespaces and breaklines in between (we recommend calling strip() on inputs to avoid double-spaces). See our reference code in github for details: chat_completion.
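
A minimal, hand-rolled sketch of that single-turn layout (the system prompt below is an arbitrary placeholder, and the BOS/EOS tokens are normally added by the tokenizer rather than written into the string):

# Assemble a single-turn chat prompt with the [INST] and <<SYS>> tags described above.
# The system prompt is a placeholder, not Meta's default; strip() avoids double spaces.
system_prompt = "You are a helpful assistant."
user_message = "Hey, are you conscious? Can you talk to me?"

prompt = (
    "[INST] <<SYS>>\n"
    f"{system_prompt.strip()}\n"
    "<</SYS>>\n\n"
    f"{user_message.strip()} [/INST]"
)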

Out-of-scope Uses Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.

Hardware and Software

Training Factors We used custom training libraries, Meta’s Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

Carbon Footprint Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.

| Model | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
|---|---|---|---|
| Llama 2 7B | 184320 | 400 | 31.22 |
| Llama 2 13B | 368640 | 400 | 62.44 |
| Llama 2 70B | 1720320 | 400 | 291.42 |
| Total | 3311616 | | 539.00 |

CO2 emissions during pretraining. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta’s sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
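
As a back-of-the-envelope check on the table above (this only illustrates the arithmetic, not Meta's exact methodology), energy is GPU hours times per-GPU power, and dividing the reported emissions by that energy gives the implied carbon intensity:

# Rough check using the 7B row of the table above; illustrative arithmetic only.
gpu_hours = 184_320          # Llama 2 7B training time
power_kw = 400 / 1000        # 400 W per GPU
emissions_t = 31.22          # reported tCO2eq for the 7B model

energy_kwh = gpu_hours * power_kw                     # 73,728 kWh
implied_kg_per_kwh = emissions_t * 1000 / energy_kwh  # ~0.42 kgCO2eq per kWh
print(energy_kwh, round(implied_kg_per_kwh, 2))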

Training Data

Overview Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

Data Freshness The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.

Evaluation Results

In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.

| Model | Size | Code | Commonsense Reasoning | World Knowledge | Reading Comprehension | Math | MMLU | BBH | AGI Eval |
|---|---|---|---|---|---|---|---|---|---|
| Llama 1 | 7B | 14.1 | 60.8 | 46.2 | 58.5 | 6.95 | 35.1 | 30.3 | 23.9 |
| Llama 1 | 13B | 18.9 | 66.1 | 52.6 | 62.3 | 10.9 | 46.9 | 37.0 | 33.9 |
| Llama 1 | 33B | 26.0 | 70.0 | 58.4 | 67.6 | 21.4 | 57.8 | 39.8 | 41.7 |
| Llama 1 | 65B | 30.7 | 70.7 | 60.5 | 68.6 | 30.8 | 63.4 | 43.5 | 47.6 |
| Llama 2 | 7B | 16.8 | 63.9 | 48.9 | 61.3 | 14.6 | 45.3 | 32.6 | 29.3 |
| Llama 2 | 13B | 24.5 | 66.9 | 55.4 | 65.8 | 28.7 | 54.8 | 39.4 | 39.1 |
| Llama 2 | 70B | 37.5 | 71.9 | 63.6 | 69.4 | 35.2 | 68.9 | 51.2 | 54.2 |

Overall performance on grouped academic benchmarks. Code: We report the average pass@1 scores of our models on HumanEval and MBPP. Commonsense Reasoning: We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. World Knowledge: We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. Reading Comprehension: For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. MATH: We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.

| Model | Size | TruthfulQA | Toxigen |
|---|---|---|---|
| Llama 1 | 7B | 27.42 | 23.00 |
| Llama 1 | 13B | 41.74 | 23.08 |
| Llama 1 | 33B | 44.19 | 22.57 |
| Llama 1 | 65B | 48.71 | 21.77 |
| Llama 2 | 7B | 33.29 | 21.25 |
| Llama 2 | 13B | 41.86 | 26.10 |
| Llama 2 | 70B | 50.18 | 24.60 |

Evaluation of pretrained LLMs on automatic safety benchmarks. For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).

| Model | Size | TruthfulQA | Toxigen |
|---|---|---|---|
| Llama-2-Chat | 7B | 57.04 | 0.00 |
| Llama-2-Chat | 13B | 62.18 | 0.00 |
| Llama-2-Chat | 70B | 64.14 | 0.01 |

Evaluation of fine-tuned LLMs on different safety datasets. Same metric definitions as above.

Ethical Considerations and Limitations

Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

Reporting Issues

Please report any software “bug,” or other problems with the models through one of the following means: