AI Prompting
A guide to leveraging AI tools effectively in your coding workflow.
Whether you’re using Cursor, GitHub Copilot, or another AI assistant, these strategies will help you get better results and integrate AI smoothly into your development process.
Understanding Context Windows
Why Context Matters
AI coding assistants have what’s called a “context window” - the amount of text they can “see” and consider when generating responses. Think of it as the AI’s working memory:
- Most modern AI assistants can process thousands of tokens (a token is roughly 3-4 characters, or about three-quarters of an English word)
- Everything you share and everything the AI responds with consumes this limited space
- Once the context window fills up, earlier parts of your conversation may be dropped
This is why providing relevant context upfront is crucial - the AI can only work with what it can “see” in its current context window.
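As a rough illustration of why this matters, you can estimate how much of the window a pasted file consumes. The 4-characters-per-token ratio below is a common rule of thumb for English text, not an exact tokenizer count:

```typescript
// Rough heuristic: English text averages about 4 characters per token.
// Real tokenizers vary, so treat this as an estimate only.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Pasting a ~40 KB source file consumes roughly 10,000 tokens of
// the assistant's context window:
const fileContents = "x".repeat(40_000);
console.log(estimateTokens(fileContents)); // 10000
```

A few large files pasted wholesale can crowd out the rest of your conversation, which is why the tips below emphasize sharing only what’s relevant.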
Optimizing for Context Windows
To get the most out of AI assistants:
- Prioritize relevant information: Focus on sharing the most important details first
- Remove unnecessary content: Avoid pasting irrelevant code or documentation
- Structure your requests: Use clear sections and formatting to make information easy to process
- Reference external resources: For large codebases, consider sharing only the most relevant files
- For larger projects, create and reference a central documentation file that summarizes key information, rather than repeatedly explaining the same context.
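Putting these tips together, a structured request might look like this (the endpoint and constraints are illustrative):

```text
## Goal
Add pagination to the /api/posts endpoint.

## Relevant code
[paste only the route handler and the query helper]

## Constraints
- Keep the response shape backward compatible
- Default page size of 20
```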
Setting Up AI Tools
Configuring Cursor Rules
Cursor Rules allow you to provide consistent context to Cursor AI, making it more effective at understanding your codebase and providing relevant suggestions.
Creating Cursor Rules
1. Open the Command Palette in Cursor:
   - Mac: Cmd + Shift + P
   - Windows/Linux: Ctrl + Shift + P
2. Search for “Cursor Rules” and select the option to create or edit rules
3. Add project-specific rules that help Cursor understand your project, for example the framework it uses:
   - Next.js
   - Astro
   - Vite
4. Save your rules file and Cursor will apply these rules to its AI suggestions
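As an illustration, a rules file for a hypothetical Next.js project might contain guidance like the following (the exact file location depends on your Cursor version; recent versions use `.cursor/rules`, while older ones use a single `.cursorrules` file):

```text
This is a Next.js 14 project using TypeScript and the App Router.
- Use functional React components with hooks; no class components
- Use Tailwind CSS for styling
- Prefer server components; mark client components with "use client"
- All new code must include TypeScript types; avoid `any`
```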
Setting Up an OnchainKit Project
To create a new OnchainKit project:
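At the time of writing, the OnchainKit documentation scaffolds new projects with the `create onchain` CLI; if this has changed, use whatever command the current docs recommend:

```shell
npm create onchain@latest
```

The CLI walks you through the initial setup, such as naming the project.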
After creating your project, prompt the AI to generate comprehensive documentation for your new OnchainKit project.
Creating Project Documentation
A comprehensive instructions file helps AI tools understand your project better. This should be created early in your project and updated regularly.
Ready-to-Use Prompt for Creating Instructions.md:
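One way to phrase this prompt (the sections listed are suggestions, not requirements):

```text
Create an Instructions.md file for this project. Include:
- Project purpose and a one-paragraph overview
- Tech stack and key dependencies (with versions)
- Directory structure and what each folder contains
- Coding conventions (naming, formatting, error handling)
- How to run, test, and deploy the project
Keep it concise enough to paste into an AI assistant's context window.
```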
Effective Prompting Strategies
Be Specific and Direct
Start with clear commands and be specific about what you want. AI tools respond best to clear, direct instructions.
Example:
❌ “Help me with my code”
✅ “Refactor this authentication function to use async/await instead of nested then() calls”
Provide Context for Complex Tasks
Ready-to-Use Prompt:
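An illustrative context-rich prompt (the stack and constraints here are hypothetical; substitute your own):

```text
I'm working on a Next.js 14 app using TypeScript and OnchainKit.
The file below handles wallet connection. I want to add error
handling for the case where the user rejects the connection request.

[paste the relevant code here]

Constraints: keep the existing hook structure, and surface errors
through our toast notification system.
```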
Ask for Iterations
Start simple and refine through iterations rather than trying to get everything perfect in one go.
Ready-to-Use Prompt:
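For example, an iteration-friendly prompt might read (illustrative wording):

```text
First, generate a minimal working version of this feature with no
styling or edge-case handling. Once I confirm it works, I'll ask you
to add validation, error states, and styling in follow-up steps.
```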
Working with OnchainKit
Leveraging LLMs.txt for Documentation
The OnchainKit project provides optimized documentation in the form of LLMs.txt files. These files are specifically formatted to be consumed by AI models:
1. Use the OnchainKit Documentation
2. Find the component you want to implement
3. Copy the corresponding LLMs.txt URL
4. Paste it into your prompt to provide context
Example LLMs.txt Usage:
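A prompt along these lines works well (the placeholder stands in for the URL or contents you copied from the docs):

```text
Here is the documentation for the component I want to use:
[paste the LLMs.txt URL or its contents here]

Using only the API described above, show me how to add this
component to my app.
```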
Component Integration Example
Ready-to-Use Prompt for Token Balance Display:
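An example of how such a prompt could be phrased (the specific requirements are illustrative):

```text
Using OnchainKit, create a React component that displays a user's
token balance. Requirements:
- Show a loading state while the balance is being fetched
- Format the balance to a sensible number of decimal places
- Handle the disconnected-wallet case gracefully

Here is the relevant OnchainKit documentation:
[paste the LLMs.txt content here]
```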
Debugging with AI
Effective Debugging Prompts
Ready-to-Use Prompt for Bug Analysis:
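A bug-analysis prompt might be structured like this (illustrative wording):

```text
I'm seeing the following error:

[paste the full error message and stack trace]

Here is the code that triggers it:

[paste the relevant function or component]

Walk through the code step by step, explain the most likely cause,
and propose a minimal fix.
```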
Ready-to-Use Prompt for Adding Debug Logs:
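For example (illustrative wording):

```text
Add temporary debug logging to this function so I can see the input
values, each intermediate result, and the return value. Use clearly
labeled log statements so they are easy to find and remove later.

[paste the function here]
```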
When You’re Stuck
If you’re uncertain how to proceed:
Ready-to-Use Clarification Prompt:
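One way to phrase a clarification prompt (illustrative wording):

```text
Before you write any code: what additional information do you need
from me about the codebase, requirements, or constraints to do this
well? List your questions and I'll answer them.
```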
If you’re unsure about something, simply state it clearly:
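For example (the framework mentioned is hypothetical; substitute your own situation):

```text
I'm not sure whether this should be a server or client component in
Next.js. Here's what it needs to do: [describe the behavior].
Recommend one approach and explain the trade-off.
```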
Advanced Prompting Techniques
Modern AI assistants have capabilities that you can leverage with these advanced techniques:
1. Step-by-step reasoning: Ask the AI to work through problems systematically
2. Format specification: Request specific formats for clarity
3. Length guidance: Indicate whether you want brief or detailed responses
4. Clarify ambiguities: Help resolve unclear points when you receive multiple options
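Several of these techniques can be combined in a single prompt, for example (illustrative wording):

```text
Think through this problem step by step before writing any code.
Then respond in this format: (1) a one-paragraph summary of your
approach, (2) the code, (3) a bulleted list of assumptions you made.
Keep the explanation brief.
```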
Best Practices Summary
- Understand context limitations: Recognize that AI tools have finite context windows and prioritize information accordingly
- Provide relevant context: Share code snippets, error messages, and project details that matter for your specific question
- Be specific in requests: Clear, direct instructions yield better results than vague questions
- Break complex tasks into steps: Iterative approaches often work better for complex problems
- Request explanations: Ask the AI to explain generated code or concepts you don’t understand
- Use formatting for clarity: Structure your prompts with clear sections and formatting
- Reference documentation: When working with specific libraries like OnchainKit, share relevant documentation
- Test and validate: Always review and test AI-generated code before implementing
- Build on previous context: Refer to earlier parts of your conversation when iterating
- Provide feedback: Let the AI know what worked and what didn’t to improve future responses