Best Practices Guide
Optimization tips, data preparation guidelines, and troubleshooting advice to get the most out of BrainPredict Sales.
Data Preparation
Clean Your CRM Data
AI models perform best with clean, accurate data. Before connecting BrainPredict Sales:
- Remove duplicate records (deals, leads, contacts)
- Standardize field values (e.g., "Negotiation" vs "Negotiating")
- Fill in missing critical fields (deal value, stage, close date)
- Archive old/inactive records (older than 3 years)
- Validate email addresses and phone numbers
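Assuming the CRM data can be exported to a DataFrame, the cleanup steps above can be sketched with pandas (the column names and sample rows here are illustrative, not a required schema):

```python
import pandas as pd

# Hypothetical CRM export; column names are illustrative
deals = pd.DataFrame([
    {"opportunity_id": "opp-1", "stage": "Negotiating", "email": "a@example.com"},
    {"opportunity_id": "opp-1", "stage": "Negotiating", "email": "a@example.com"},  # duplicate
    {"opportunity_id": "opp-2", "stage": "Negotiation", "email": "not-an-email"},
])

# 1. Remove duplicate records
deals = deals.drop_duplicates(subset="opportunity_id", keep="first")

# 2. Standardize field values ("Negotiating" -> "Negotiation")
deals["stage"] = deals["stage"].replace({"Negotiating": "Negotiation"})

# 3. Flag invalid email addresses for manual review
email_ok = deals["email"].str.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+")
print(deals[~email_ok][["opportunity_id", "email"]])
```

The same pattern extends to leads and contacts; the point is to dedupe, normalize, and flag before the first sync rather than after.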
Minimum Data Requirements
For optimal AI model performance, ensure you have:
- Deals: Minimum 500 historical deals (1,000+ recommended)
- Leads: Minimum 1,000 historical leads (5,000+ recommended)
- Time Range: At least 12 months of historical data (24+ months recommended)
- Outcomes: Both won and lost deals for accurate predictions
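A quick readiness check against these minimums might look like the following (thresholds are taken from the list above; the 30.44-day average month is an approximation):

```python
from datetime import datetime, timedelta

def meets_minimums(deal_count, lead_count, oldest_record_date):
    """Check the documented minimums: 500 deals, 1,000 leads, 12 months of history."""
    months_of_history = (datetime.now() - oldest_record_date).days / 30.44
    return deal_count >= 500 and lead_count >= 1000 and months_of_history >= 12

# Three years of history with healthy volumes passes the check
print(meets_minimums(800, 2500, datetime.now() - timedelta(days=3 * 365)))
```

This only checks volume and recency; it does not verify that both won and lost outcomes are present, which matters just as much for prediction quality.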
Required Fields
Ensure these fields are populated for all records:
Deal Prediction
- opportunity_id (required)
- deal_value (required)
- stage (required)
- close_date (required)
- days_in_stage (recommended)
- contact_count (recommended)
- last_activity_days (recommended)

Lead Scoring
- lead_id (required)
- company_size (required)
- industry (required)
- budget (recommended)
- engagement_score (recommended)
- source (recommended)
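One way to catch incomplete records before upload is a small validation helper built from the required-field list above (the helper itself is illustrative, not part of the SDK):

```python
REQUIRED_DEAL_FIELDS = ["opportunity_id", "deal_value", "stage", "close_date"]

def missing_required_fields(record, required=REQUIRED_DEAL_FIELDS):
    """Return the required fields that are absent or empty in a record."""
    return [f for f in required if record.get(f) in (None, "")]

record = {"opportunity_id": "opp-123", "deal_value": 50000, "stage": "Negotiation"}
print(missing_required_fields(record))  # close_date is missing
```

Running this over an export before the first sync surfaces gaps while they are still cheap to fix in the CRM.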
Model Optimization
Custom Model Training
Enterprise customers can request custom model training for improved accuracy:
- Requires minimum 10,000 historical records
- Training takes 2-4 weeks
- Typically improves accuracy by 3-5%
- Models are retrained quarterly with new data
Feature Engineering
Add custom features to improve predictions:
```python
# Example: add custom features
prediction = client.predict_deal(
    opportunity_id='opp-123',
    deal_value=50000,
    stage='Negotiation',
    # Standard features
    days_in_stage=15,
    contact_count=3,
    # Custom features
    custom_features={
        'executive_sponsor': True,
        'budget_approved': True,
        'competitor_count': 2,
        'product_demo_completed': True,
        'technical_validation_passed': True
    }
)
# Custom features can improve accuracy by 5-10%
```

Batch Processing
For large datasets, use batch processing for better performance:
```python
# Process 100 deals at once (10x faster than individual requests)
predictions = client.predict_deal_batch(
    deals=[
        {'opportunity_id': 'opp-1', 'deal_value': 50000, ...},
        {'opportunity_id': 'opp-2', 'deal_value': 75000, ...},
        # ... up to 100 deals
    ]
)
# Batch processing is 10x faster and counts as 1 API request
```

Performance Optimization
Caching
Cache predictions to reduce API calls and improve response time:
```python
import json

import redis

cache = redis.Redis(host='localhost', port=6379)

def get_prediction_cached(opportunity_id):
    # Check the cache first
    cached = cache.get(f'prediction:{opportunity_id}')
    if cached:
        return json.loads(cached)
    # If not cached, call the API
    prediction = client.predict_deal(opportunity_id=opportunity_id)
    # Cache for 1 hour
    cache.setex(f'prediction:{opportunity_id}', 3600, json.dumps(prediction))
    return prediction
```

Webhooks
Use webhooks for real-time updates instead of polling:
```python
import hashlib
import hmac

from flask import Flask, request

app = Flask(__name__)

# Configure the webhook endpoint
client.configure_webhook(
    url='https://your-app.com/webhooks/brainpredict',
    events=['deal.predicted', 'lead.scored', 'churn.detected'],
    secret='your-webhook-secret'
)

# Your webhook handler
@app.route('/webhooks/brainpredict', methods=['POST'])
def handle_webhook():
    # Verify the HMAC signature before trusting the payload
    # (the header name here is illustrative; check the portal's webhook docs)
    expected = hmac.new(b'your-webhook-secret', request.get_data(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, request.headers.get('X-Webhook-Signature', '')):
        return {'status': 'invalid signature'}, 401
    payload = request.json
    if payload['event'] == 'deal.predicted':
        # Update your database with the new prediction
        update_deal_prediction(payload['data'])
    return {'status': 'success'}
```

Rate Limit Management
Implement exponential backoff for rate limit errors:
```python
import time

def predict_with_retry(opportunity_id, max_retries=3):
    for attempt in range(max_retries):
        try:
            return client.predict_deal(opportunity_id=opportunity_id)
        except RateLimitError:  # raised by the client when the rate limit is hit
            if attempt == max_retries - 1:
                raise
            # Exponential backoff: 1s, 2s, 4s
            time.sleep(2 ** attempt)
```

Troubleshooting
Low Prediction Accuracy
Symptoms: Predictions don't match actual outcomes
Solutions:
- Ensure you have at least 500 historical deals
- Check for data quality issues (missing fields, duplicates)
- Verify field mappings are correct
- Request custom model training (Enterprise only)
- Add more custom features to improve predictions
Slow API Response
Symptoms: API calls take longer than 2 seconds
Solutions:
- Use batch processing for multiple predictions
- Implement caching to reduce API calls
- Use webhooks instead of polling
- Upgrade to a higher tier for better performance
- Contact support if issue persists
CRM Sync Issues
Symptoms: Data not syncing between CRM and BrainPredict Sales
Solutions:
- Check CRM credentials are valid
- Verify field mappings are correct
- Check CRM API rate limits
- Review sync logs in portal
- Contact support with sync log details
Security Best Practices
API Key Management
- Store API keys in environment variables, never in code
- Rotate API keys every 90 days
- Use separate keys for development and production
- Revoke compromised keys immediately
- Monitor API usage for suspicious activity
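For example, loading the key from an environment variable rather than hard-coding it (the variable name is illustrative):

```python
import os

def load_api_key(var="BRAINPREDICT_API_KEY"):
    """Read the API key from the environment instead of embedding it in code."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it before starting the app")
    return key

# Demo only: a real deployment would export the variable via the shell or a secret manager
os.environ.setdefault("BRAINPREDICT_API_KEY", "sk-demo-only")
print(load_api_key())
```

Keeping separate values of this variable for development and production also satisfies the separate-keys guideline above.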
Data Privacy
- All data is encrypted in transit (TLS 1.3) and at rest (AES-256)
- Data is isolated per tenant (no cross-tenant access)
- GDPR, CCPA, and SOC 2 compliant
- Data retention: 90 days (configurable)
- Right to deletion: Contact support