Core Concepts

How ModelFactory Works

ModelFactory integrates dataset access control and model fine-tuning into a seamless workflow, ensuring data security and ownership integrity. The platform’s primary processes include:

Dataset Request & Access

  • Users submit dataset requests through OpenLedger’s repository.

  • Providers approve or deny access based on established policies.

  • Approved datasets are automatically linked to the user’s ModelFactory interface.
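Below is a minimal sketch of what a dataset access request could look like programmatically. The endpoint URL, payload fields, and response shape are illustrative assumptions only; OpenLedger's actual request flow runs through its repository interface.

```python
import requests

# Hypothetical endpoint and payload -- OpenLedger's dataset-request API
# is not documented here, so all names below are illustrative.
OPENLEDGER_API = "https://api.openledger.example/v1"

def request_dataset_access(dataset_id: str, requester: str, purpose: str) -> dict:
    """Submit a dataset access request; the provider approves or denies it."""
    response = requests.post(
        f"{OPENLEDGER_API}/datasets/{dataset_id}/access-requests",
        json={"requester": requester, "purpose": purpose},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"request_id": "...", "status": "pending"}

# Once a provider approves, the dataset is linked to the user's ModelFactory
# interface; polling the request status is one way to detect the approval.
```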

Model Selection & Configuration

  • Users select from a wide range of LLMs (e.g., LLaMA, Mistral, DeepSeek).

  • Hyperparameters like learning rate, batch size, and epochs are configured through the GUI.
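For illustration, the hyperparameters the GUI exposes (learning rate, batch size, epochs) map naturally onto a standard Hugging Face `TrainingArguments` configuration. The values shown are placeholders, not recommended defaults.

```python
from transformers import TrainingArguments

# Illustrative values only -- in ModelFactory these are set through the GUI.
training_args = TrainingArguments(
    output_dir="./finetune-output",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    num_train_epochs=3,
    logging_steps=10,
    save_strategy="epoch",
)
```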

Fine-Tuning Process

  • The fine-tuning engine supports methods such as LoRA and QLoRA.

  • Real-time dashboards provide training progress insights.
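As a rough sketch of what LoRA fine-tuning involves, the snippet below uses the Hugging Face `peft` library with Mistral as an example base model. It does not reflect ModelFactory's internal engine; it only illustrates the kind of adapter configuration behind LoRA-style methods.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Example base model; ModelFactory exposes LLaMA, Mistral, DeepSeek, etc.
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

# Typical LoRA settings -- actual values depend on the model and dataset.
lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train
```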

Evaluation & Deployment

  • Built-in evaluation tools help analyze model performance.

  • Exported models can be deployed in user applications or shared within the ecosystem.
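One plausible export path, assuming a LoRA adapter checkpoint like the sketch above: merge the adapter into the base weights and save a standard checkpoint that downstream applications can load. All paths and checkpoint names are illustrative.

```python
from transformers import AutoTokenizer, pipeline
from peft import AutoPeftModelForCausalLM

# Load the fine-tuned adapter, merge it into the base weights, and export
# a plain checkpoint that can be deployed or shared (paths are illustrative).
model = AutoPeftModelForCausalLM.from_pretrained("./finetune-output/checkpoint-final")
merged = model.merge_and_unload()
merged.save_pretrained("./exported-model")

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer.save_pretrained("./exported-model")

# Quick sanity check of the exported model before deployment.
generator = pipeline("text-generation", model="./exported-model")
print(generator("Explain LoRA fine-tuning in one sentence.", max_new_tokens=64))
```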

Chat Interface

  • Enables users to interact with fine-tuned models directly through the GUI.

  • Supports real-time question-and-answer sessions or task-specific interactions.
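A minimal command-line stand-in for the GUI chat, assuming the exported model path from the previous sketch:

```python
from transformers import pipeline

# Send each prompt to the fine-tuned model and print its reply.
chat = pipeline("text-generation", model="./exported-model")

while True:
    prompt = input("You: ")
    if not prompt:
        break
    # Note: "generated_text" includes the prompt followed by the completion.
    reply = chat(prompt, max_new_tokens=128, do_sample=True)[0]["generated_text"]
    print("Model:", reply)
```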

RAG Attribution

  • Provides retrieval-augmented generation (RAG) capabilities.

  • When users ask a question, the model responds with generated output alongside source citations.

  • Ensures proper attribution of information and enhances trust in generated outputs.
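The snippet below is a simplified illustration of retrieval with source citations, using sentence-transformers for similarity search. The document store, IDs, and retrieval logic are assumptions made for demonstration; OpenLedger's actual RAG attribution pipeline is described in its own section of these docs.

```python
from sentence_transformers import SentenceTransformer, util

# Toy document store keyed by citation ID (contents are placeholders).
documents = {
    "doc-001": "Datanets are community-curated datasets on OpenLedger.",
    "doc-002": "Proof of Attribution records which data influenced a model output.",
}

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve(question: str, top_k: int = 1):
    """Return the IDs and scores of the documents most similar to the question."""
    doc_ids = list(documents)
    doc_embs = embedder.encode([documents[d] for d in doc_ids], convert_to_tensor=True)
    q_emb = embedder.encode(question, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, doc_embs)[0]
    ranked = sorted(zip(doc_ids, scores.tolist()), key=lambda x: x[1], reverse=True)
    return ranked[:top_k]

sources = retrieve("What does Proof of Attribution do?")
context = " ".join(documents[doc_id] for doc_id, _ in sources)
# The fine-tuned model would generate its answer from `context`; the citation
# list below is what would accompany that generated output.
print("Cited sources:", [doc_id for doc_id, _ in sources])
```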