SPT-BASE

Skeptical Pretrained Transformer – Base

Uncompromising & unfiltered

Model Information

Version: v1.0

Model Type: Causal-LM, 8-bit quantized + LoRA adapter

Architecture: SPT (Skeptical Pretrained Transformer)

Creator: Dr. Harsh Vardhan Chopra

Organization: Gorq AI Platforms

Context Window: Up to 128K tokens (131,072)

License: Gorq-SPT-License (See details below)

📖 Overview

SPT-BASE is the inaugural model in the Skeptical Pretrained Transformer (SPT) series, meticulously built from scratch by Gorq AI Platforms. This model represents a significant step in developing sovereign AI capabilities with a unique approach to reasoning and response generation.

SPT-BASE underwent a full end-to-end training regime utilizing proprietary datasets and a novel multi-stage reasoning pipeline designed to enhance robustness, self-critique, and the generation of polished, well-considered outputs. The internal pipeline is conceptualized as:

```
[Input] → … (base reasoning)
        → … (deep analysis)
        → … (first draft)
        → … (self-critique)
        → … (revised draft)
        → … (skeptical check)
        → … (polished answer)
```

Skepticism Notice:

Some may assume SPT-BASE was derived or “tuned” from existing models like Qwen2.5. Rest assured, SPT-BASE was trained from its foundations by Gorq AI Platforms, though it naturally employs similar transformer building blocks common in modern AI architectures. Our commitment is to genuine, from-scratch development for sovereign capabilities.
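For illustration, the staged pipeline diagrammed above could be driven by chaining stage-specific prompts through any text-generation callable (such as the pipeline shown in the Usage section below). This is a hypothetical sketch of the concept, not SPT-BASE’s actual internal mechanism; the stage labels mirror the diagram and `generate_text` is a placeholder.

```python
# Hypothetical sketch: drive a multi-stage reasoning loop by feeding each
# stage the original prompt plus the previous stage's output. SPT-BASE runs
# these stages internally; this only illustrates the flow of the diagram.
STAGES = [
    "base reasoning",
    "deep analysis",
    "first draft",
    "self-critique",
    "revised draft",
    "skeptical check",
    "polished answer",
]

def staged_answer(prompt: str, generate_text) -> str:
    context = prompt
    for stage in STAGES:
        # Tag the context with the current stage and regenerate.
        context = generate_text(f"[{stage}]\n{context}")
    return context  # output of the final "polished answer" stage
```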

🚀 Key Features

| Feature | Description |
| --- | --- |
| Truly From Scratch | No downstream tuning on any external pre-trained checkpoints; full end-to-end training by Gorq AI. |
| Multi-Stage Reasoning | Internal multi-stage pipeline for robust, self-critiqued, and refined answers. |
| 8-bit Quantization + LoRA | Memory-efficient fine-tuning adapters, available in 8-bit precision for consumer-GPU accessibility. |
| Massive Context Window | Supports up to 131,072 tokens, enabling comprehensive understanding of long-form documents and extended dialogues. |
| Ethical, Free & Nonprofit | Developed under Gorq AI Platforms’ nonprofit ethos: always free for public research and non-commercial use, per the license. |
| Indian Cultural Alignment | |

🧪 Training Details

  • Base Architecture: Transformer-based causal language model with approximately 3.09 billion parameters, trained by Gorq AI Platforms.
  • Quantization & Adapters: Loaded in 8-bit precision via `bitsandbytes`. Includes PEFT LoRA adapters (rank 8, α = 16, dropout 0.05) for efficient fine-tuning; see the sketch after this list.
  • Optimization: Utilized the AdamW optimizer, FP16 mixed precision, gradient checkpointing, and gradient accumulation (1 × 8 steps) to manage training with large context windows on available hardware.
  • Data: Trained on a custom-curated, multi-stage JSONL pipeline dataset covering prompts, deep-analysis stages, self-critique loops, and final polished outputs, reflecting the model’s internal reasoning process.
  • Hardware: Initial training conducted on Google Colab GPU infrastructure (NVIDIA T4) over 3 epochs with the accumulation steps noted above.
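The quantization and adapter settings above can be reproduced with `bitsandbytes` and `peft`. The following is a minimal sketch using the listed hyperparameters; the output directory and the default LoRA target modules are illustrative assumptions, not confirmed details of SPT-BASE’s training run.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 8-bit precision via bitsandbytes.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "Gorq-AI-Platforms/SPT-BASE",
    quantization_config=bnb_config,
    device_map="auto",
)

# Prepare the quantized model for training (casts norms, enables input grads).
model = prepare_model_for_kbit_training(model)

# LoRA adapter with the hyperparameters listed above: rank 8, alpha 16, dropout 0.05.
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

# Optimizer settings mirroring the Training Details: AdamW, FP16 mixed
# precision, gradient checkpointing, batch 1 with 8 accumulation steps, 3 epochs.
training_args = TrainingArguments(
    output_dir="spt-base-lora",  # assumed output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=3,
    fp16=True,
    gradient_checkpointing=True,
    optim="adamw_torch",
)
```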

📥 Usage

To use SPT-BASE with the Hugging Face Transformers library:


from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Gorq-AI-Platforms/SPT-BASE")
model     = AutoModelForCausalLM.from_pretrained("Gorq-AI-Platforms/SPT-BASE")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Standard inference
print(pipe("Who are you?")[0]["generated_text"])

# Debug mode (to see all internal reasoning stages)
# Prepend "DEBUG:" to your prompt
print(pipe("DEBUG: Explain your reasoning pipeline")[0]["generated_text"])

        

Note: Ensure you have the necessary libraries (`transformers`, `torch`, `bitsandbytes`, `peft`, `accelerate`) installed.
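Because SPT-BASE ships 8-bit quantized with a LoRA adapter, memory-constrained inference can load the base weights through `bitsandbytes` and attach the adapter with `peft`. A minimal sketch; the adapter location is an assumption for illustration:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Load the base weights in 8-bit precision to fit consumer GPUs.
base = AutoModelForCausalLM.from_pretrained(
    "Gorq-AI-Platforms/SPT-BASE",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# Attach the LoRA adapter (assumed here to live in the same repository).
model = PeftModel.from_pretrained(base, "Gorq-AI-Platforms/SPT-BASE")
```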

⚖️ License & Usage Terms

SPT-BASE is released under the **Gorq-SPT-License**.

Initial Statement: You may use SPT-BASE for **research, education, and nonprofit purposes only**. Modification or redistribution without written consent is strictly prohibited.

Subsequent Statement: This model is distributed under the Gorq SPT License, which permits **commercial, non-modifiable use only**. No modifications, redistribution, or rebranding allowed without prior written consent from Gorq AI Platforms. Any unauthorized change to the model or its metadata violates the license and terminates usage rights.

All references to “SPT-BASE”, “Gorq AI Platforms”, and “Harsh Vardhan Chopra” must remain intact in all applications and derivative works where permitted.

For redistribution, specific commercial use cases beyond simple inference, or any architectural modifications, a **No Objection Certificate (NOC)** must be requested from Gorq AI Platforms.

Please refer to the full legal text in the LICENSE.md document for complete terms and conditions.

All disputes related to this model or its license fall under the exclusive jurisdiction of courts in Uttar Pradesh, India.

🌐 References & Acknowledgments

  • Gorq AI Platforms: For its commitment to democratizing sovereign, ethical AI for all.
  • Dr. Harsh Vardhan Chopra: Visionary technologist and founder of Gorq AI Platforms.
  • Open-Source Community: Gratitude to the developers of `bitsandbytes`, `PEFT`, and `Hugging Face Transformers` for their invaluable open-source tools that make projects like SPT-BASE possible.

“Skepticism sharpens truth.” – Dr. Harsh Vardhan Chopra

📌 How to Cite

If you use SPT-BASE in your research or publications, please cite it as follows:

Harsh Vardhan Chopra. (2025). SPT-BASE: Skeptical Pretrained Transformer – Base. Gorq AI Platforms.
Retrieved from https://huggingface.co/Gorq-AI-Platforms/SPT-BASE (or primary model source)

Creator: Harsh Vardhan Chopra
Organization: Gorq AI Platforms
Model Name: SPT-BASE
Source: https://huggingface.co/Gorq-AI-Platforms/SPT-BASE
        

📬 Contact & NOC Requests

To request permission for modifications, redistribution, rebranding, or specific commercial uses not covered by the standard license terms, please submit a No Objection Certificate (NOC) request:

Your request should include:

  • Your name and affiliation/organization.
  • Detailed purpose and scope of your intended use.
  • A clear commitment to retain all Gorq AI Platforms branding, attributions, and ethical alignment principles.

A template for NOC requests can be found here: NOC-REQUEST-TEMPLATE.md

🔐 Integrity Verification (SHA-256 Hashes)

Verify the integrity of downloaded model files against these SHA-256 hashes:

  • Binary (`.bin`): b9aeb0794e7a246ec368e477b16ca08093272ee8ab0c932d69a725d7d570c014
  • Safetensor (`.safetensors`): d4930af7904c9deed9656d158e2adae3a3c681881fa025cfcfa9ae70544df628
  • Tokenizer Files (Combined/Representative): 74c2913b463e405c1e153ec0e75813f1a0c5fa5bf254ad7ab3eba503978da688
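To check a download locally, a hash can be computed with Python’s standard `hashlib`; a minimal sketch (the local file name is an illustrative assumption):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so large model weights fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "d4930af7904c9deed9656d158e2adae3a3c681881fa025cfcfa9ae70544df628"
print("OK" if sha256_of("model.safetensors") == expected else "MISMATCH")  # assumed file name
```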

πŸ‘¨πŸ»β€πŸ’» Team & Leadership Behind SPT-BASE

(Dr.) Harsh Vardhan Chopra

Founder of Gorq AI Platforms

Instagram | Twitter | LinkedIn | Website

Ayush Mishra

CMO of Gorq AI Platforms and Co-Founder of GFT-ALPHA

Instagram

Mehtab Hassan

Advisor and Supporter of Gorq AI Platforms

Website

© 2025 Harsh Vardhan Chopra – Gorq AI Platforms – All rights reserved.

❗ IMPORTANT NOTICE: This model is distributed under the Gorq SPT License. No modifications, redistribution, or rebranding are allowed without prior written consent from Gorq AI Platforms. Any unauthorized change to the model or its metadata violates the license and immediately terminates all usage rights.

All references to “SPT-BASE”, “Gorq AI Platforms”, and “Harsh Vardhan Chopra” must remain intact in all applications and derivative works.