The Problem
Letting anonymous users hit AWS Bedrock directly is an obvious cost risk. The portfolio apps use a lightweight daily-budget model that limits how many AI calls can be made per day, across all users.
The Model
```python
from django.db import models
from django.utils import timezone


class DailyAIUsage(models.Model):
    """One row per calendar day, shared across all users."""

    date = models.DateField(unique=True)
    count = models.IntegerField(default=0)
    limit = models.IntegerField(default=50)

    @classmethod
    def get_or_create_today(cls):
        today = timezone.localdate()
        obj, _ = cls.objects.get_or_create(date=today)
        return obj

    def can_generate(self) -> bool:
        return self.count < self.limit

    def increment(self):
        # F() pushes the increment into the database (atomic).
        self.count = models.F("count") + 1
        self.save(update_fields=["count"])
        # Reload so self.count is a plain int again, not an F() expression.
        self.refresh_from_db(fields=["count"])
```
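The same pattern works outside Django too. Here is a minimal sketch in raw SQL using the standard-library `sqlite3` module; the table and column names are illustrative, not part of the portfolio apps:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE daily_ai_usage ("
    "  date TEXT PRIMARY KEY,"
    "  count INTEGER NOT NULL DEFAULT 0,"
    "  daily_limit INTEGER NOT NULL DEFAULT 50)"
)


def get_or_create_today(conn, today):
    # INSERT OR IGNORE mirrors Django's get_or_create on a unique column.
    conn.execute("INSERT OR IGNORE INTO daily_ai_usage (date) VALUES (?)", (today,))
    return conn.execute(
        "SELECT count, daily_limit FROM daily_ai_usage WHERE date = ?", (today,)
    ).fetchone()


def increment(conn, today):
    # The increment happens inside the database, just like F("count") + 1.
    conn.execute("UPDATE daily_ai_usage SET count = count + 1 WHERE date = ?", (today,))


get_or_create_today(conn, "2024-01-01")
increment(conn, "2024-01-01")
count, limit = get_or_create_today(conn, "2024-01-01")
print(count, limit)  # → 1 50
```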
Using It in a View
```python
from django.http import JsonResponse
from django.views.decorators.http import require_POST


@require_POST
def generate_ai_image(request):
    usage = DailyAIUsage.get_or_create_today()
    if not usage.can_generate():
        return JsonResponse(
            {"error": "Daily AI limit reached. Try again tomorrow."},
            status=429,
        )
    prompt = request.POST.get("prompt", "")
    image_url = bedrock.generate_image(prompt)
    usage.increment()
    return JsonResponse({"url": image_url})
```
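The atomic increment inside `DailyAIUsage.increment` matters because the naive read-modify-write pattern can lose updates when two requests interleave. A deterministic simulation of that interleaving, in plain Python with no database:

```python
# Two "requests" each read the counter, then each write back read + 1.
count = 5

a = count      # request A reads 5
b = count      # request B reads 5, before A has written

count = a + 1  # A writes 6
count = b + 1  # B also writes 6 -- A's increment is lost

print(count)   # → 6, not 7
```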
Tip: Using `models.F("count") + 1` instead of `self.count += 1` performs the increment atomically in the database, preventing race conditions under concurrent requests.

The AWS Variant
The waes_chat_e app has a second predict endpoint that routes to AWS Bedrock instead of the local model. It uses the same DailyAIUsage check with a lower limit (10 calls/day) because Bedrock invocations cost real money.
```python
from django.views.decorators.csrf import csrf_exempt

BEDROCK_DAILY_LIMIT = 10


@csrf_exempt
def predict_aws(request):
    usage = DailyAIUsage.get_or_create_today()
    if usage.count >= BEDROCK_DAILY_LIMIT:
        return JsonResponse({"error": "Bedrock limit reached"}, status=429)
    # ... invoke bedrock ...
    usage.increment()
```
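Note that the check and the increment here are still two separate queries, so two concurrent requests could both pass the check at `count == 9`. One way to close that gap is a single conditional update that only increments while budget remains; in Django this would look something like `DailyAIUsage.objects.filter(date=today, count__lt=BEDROCK_DAILY_LIMIT).update(count=F("count") + 1)` with a check on the returned row count. A minimal `sqlite3` sketch of the idea (table name and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usage (date TEXT PRIMARY KEY, count INTEGER NOT NULL)")
conn.execute("INSERT INTO usage VALUES ('2024-01-01', 9)")

LIMIT = 10


def try_consume(conn, today):
    # Check and increment in one statement: only rows still under the
    # limit are updated, so concurrent callers cannot both slip through.
    cur = conn.execute(
        "UPDATE usage SET count = count + 1 WHERE date = ? AND count < ?",
        (today, LIMIT),
    )
    return cur.rowcount == 1  # True if we claimed a slot


first = try_consume(conn, "2024-01-01")   # claims the last slot
second = try_consume(conn, "2024-01-01")  # budget exhausted
print(first, second)  # → True False
```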