
Firefly Chinese Llama2 Incremental Pre-training Dataset

This dataset is the incremental pre-training corpus of the Firefly-LLaMA2-Chinese project, totaling about 22 GB of text. It mainly consists of open-source datasets such as CLUE, ThucNews, CNews, COIG, and Wikipedia, plus ancient poetry, prose, and classical Chinese texts collected by the research team. The data distribution is shown in the figure below.

firefly-pretrain-dataset.torrent
Seeding 2 · Downloading 0 · Completed 160 · Total Downloads 245
  • firefly-pretrain-dataset/
    • README.md (1.04 KB)
    • README.txt (2.09 KB)
    • data/
      • firefly-pretrain-dataset.zip (9.02 GB)
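
Below is a minimal sketch of how one might unpack the archive and load the corpus for incremental pre-training. The internal layout of firefly-pretrain-dataset.zip (plain-text vs. JSONL files) and the local paths are assumptions, not documented by the project; adjust them to whatever the unpacked archive actually contains.

```python
# Sketch: unpack the downloaded archive and load the text with Hugging Face
# `datasets`. Paths and file formats below are assumptions for illustration.
import zipfile
from pathlib import Path

from datasets import load_dataset  # pip install datasets

ARCHIVE = Path("firefly-pretrain-dataset/data/firefly-pretrain-dataset.zip")
EXTRACT_DIR = Path("firefly-pretrain-dataset/data/extracted")

# Unpack once; the archive is ~9 GB compressed, so ensure enough disk space.
EXTRACT_DIR.mkdir(parents=True, exist_ok=True)
with zipfile.ZipFile(ARCHIVE) as zf:
    zf.extractall(EXTRACT_DIR)

# If the archive holds JSONL files (one record per line), load them as JSON;
# otherwise fall back to treating each file as raw text, one row per line.
jsonl_files = sorted(str(p) for p in EXTRACT_DIR.rglob("*.jsonl"))
if jsonl_files:
    corpus = load_dataset("json", data_files=jsonl_files, split="train")
else:
    text_files = sorted(str(p) for p in EXTRACT_DIR.rglob("*.txt"))
    corpus = load_dataset("text", data_files=text_files, split="train")

print(corpus)  # inspect column names and row count before tokenizing/training
```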
