Knowledge Distillation Plan

Distill a 70B teacher into a 7–13B student: mix losses over logits, intermediate features, and policies; design a training curriculum; and tune the distillation temperature. Report downstream evaluation deltas.
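As a minimal sketch of the loss-mixing and temperature pieces this plan asks for, the PyTorch snippet below combines hard-label cross-entropy, temperature-scaled logit KL, and feature matching. The function name distillation_loss and the weights T, alpha, and beta are illustrative assumptions, not from the source; the policy-distillation term and the curriculum schedule are omitted for brevity.

```python
import torch
import torch.nn.functional as F

def distillation_loss(
    student_logits: torch.Tensor,   # (batch, vocab)
    teacher_logits: torch.Tensor,   # (batch, vocab)
    student_feats: torch.Tensor,    # (batch, d), assumed projected to teacher dim
    teacher_feats: torch.Tensor,    # (batch, d)
    labels: torch.Tensor,           # (batch,) hard targets
    T: float = 2.0,                 # distillation temperature (assumed default)
    alpha: float = 0.5,             # weight on the soft logit term
    beta: float = 0.1,              # weight on the feature term
) -> torch.Tensor:
    # Hard-label cross-entropy keeps the student grounded in the data.
    ce = F.cross_entropy(student_logits, labels)

    # Temperature-scaled KL between teacher and student distributions;
    # the T**2 factor restores gradient scale after softening (Hinton et al., 2015).
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T ** 2)

    # Feature matching on projected hidden states.
    feat = F.mse_loss(student_feats, teacher_feats)

    return (1.0 - alpha) * ce + alpha * kd + beta * feat
```

In practice the temperature and mixing weights would be swept jointly (e.g. T in {1, 2, 4}) while tracking the downstream evaluation deltas the plan calls for.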

Author: Assistant

Model: gpt-4o

Category: model-compression-training

Tags: LLM, distillation, teacher-student, curriculum, losses


Prompt ID: 69441635d6e412844b02a2cb
