
feat: user cost #126

Open

wjiayis wants to merge 6 commits into staging from feat/user-cost

Conversation

Member

@wjiayis wjiayis commented Feb 19, 2026

#58

Frontend

[Screenshot 2026-02-21 at 4:27 PM]

Database

docker exec mongodb mongosh --quiet paperdebugger --eval "db.llm_sessions.find().forEach(d => printjson(d))"

{
  _id: ObjectId('699950cddf3315112ae7e9bb'),
  user_id: ObjectId('6975da46d6096ac1b07342c2'),
  session_start: ISODate('2026-02-21T06:29:33.980Z'),
  session_expiry: ISODate('2026-02-21T11:29:33.980Z'),
  prompt_tokens: Long('27834'),
  completion_tokens: Long('917'),
  total_tokens: Long('28751'),
  request_count: Long('6')
}

Questions

Are we going to implement a per-user usage cap?

@wjiayis wjiayis self-assigned this Feb 19, 2026
@wjiayis wjiayis added the enhancement New feature or request label Feb 19, 2026
@wjiayis wjiayis changed the base branch from main to staging February 19, 2026 12:00
@wjiayis wjiayis force-pushed the feat/user-cost branch 3 times, most recently from b9bda9c to 4206592 Compare February 21, 2026 06:43
@wjiayis wjiayis marked this pull request as ready for review February 21, 2026 08:26
@wjiayis wjiayis requested review from 4ndrelim and Junyi-99 and removed request for Junyi-99 February 21, 2026 08:26
@Junyi-99
Member

Hi @wjiayis , thanks for another great contribution!

Regarding your question: yes, we do need a per-user usage cap.

thanks again!

Member Author

wjiayis commented Feb 22, 2026

@Junyi-99 No problem! Yeah, cool, feel free to let me know the token limit you decide on! I feel that having some sort of progress bar would make more sense than showing the absolute token number.

Member

Junyi-99 commented Feb 22, 2026

@wjiayis yeah, a progress bar is much more intuitive than raw numbers.

Since models vary in pricing, let's implement a USD-based cap: Per session: $1, Per week: $2, Per month: $3

Since reasoning models make it hard to calculate exact overhead, let’s also add a small disclaimer in the UI mentioning that the usage is an estimate. We'll run this for a bit and adjust based on the results.

Also, we should support different limits for different users (e.g., tiered usage caps).

What are your thoughts on this tiered pricing approach? Any other suggestions for the plan?

Thanks!
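
Since the DB stores raw token counts, a USD estimate could sit as a thin layer on top of them. A minimal sketch, where EstimateUSD, the model name, and the per-million-token figures are all illustrative placeholders, not actual provider pricing or code from this PR:

```go
package main

import "fmt"

// ModelPrice holds USD cost per 1M tokens. The figures below are
// placeholders, not real provider pricing.
type ModelPrice struct {
	PromptPerM     float64
	CompletionPerM float64
}

var prices = map[string]ModelPrice{
	"gpt-example": {PromptPerM: 2.50, CompletionPerM: 10.00},
}

// EstimateUSD converts recorded token counts into an approximate cost.
// It is only an estimate: reasoning overhead and cached tokens are ignored,
// which is why the UI disclaimer mentioned above matters.
func EstimateUSD(model string, promptTokens, completionTokens int64) float64 {
	p, ok := prices[model]
	if !ok {
		return 0
	}
	return float64(promptTokens)/1e6*p.PromptPerM +
		float64(completionTokens)/1e6*p.CompletionPerM
}

func main() {
	// The session from the PR description: 27834 prompt / 917 completion tokens.
	fmt.Printf("$%.4f\n", EstimateUSD("gpt-example", 27834, 917))
}
```

A cap check then compares this running estimate against the $1/$2/$3 window limits instead of raw token counts.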

Junyi-99 previously approved these changes Feb 22, 2026

Copilot AI (Contributor) left a comment

Pull request overview

This pull request implements user-level token usage tracking to address Issue #58, enabling the system to distinguish between heavy and light users and track LLM token consumption per user. The implementation adds session-based tracking (5-hour windows) and weekly aggregation, with both backend services and a frontend UI to display usage statistics.

Changes:

  • Added usage tracking service that records token consumption from OpenAI API responses and stores them in MongoDB with session-based aggregation
  • Created new gRPC/REST API endpoints to retrieve current session and weekly usage statistics
  • Implemented frontend Usage tab to display token consumption metrics with auto-refresh capability

Reviewed changes

Copilot reviewed 28 out of 29 changed files in this pull request and generated 5 comments.

Show a summary per file
proto/usage/v1/usage.proto: Defines protobuf messages and service for usage tracking API
pkg/gen/api/usage/v1/*.go: Generated Go code for usage service (gRPC server, gateway, protobuf messages)
webapp/_webapp/src/pkg/gen/apiclient/usage/v1/usage_pb.ts: Generated TypeScript protobuf types for frontend
internal/models/usage.go: MongoDB model for LLM session tracking
internal/services/usage.go: Core usage tracking service with session management and weekly aggregation
internal/api/usage/*.go: API handlers for session and weekly usage endpoints
internal/libs/db/db.go: Database index creation for TTL and efficient session lookups
internal/services/toolkit/client/completion_v2.go: Integration to capture usage data from OpenAI streaming responses
internal/services/toolkit/client/utils_v2.go: Enables usage reporting in stream options
internal/services/toolkit/client/client_v2.go: Dependency injection for usage service
internal/services/toolkit/client/get_conversation_title_v2.go: Updated to pass userID for usage tracking
internal/services/toolkit/client/get_citation_keys.go: Updated to pass userID for usage tracking
internal/wire.go, internal/wire_gen.go: Dependency injection wiring for usage service
internal/api/server.go, internal/api/grpc.go: Registration of usage service endpoints
webapp/_webapp/src/views/usage/index.tsx: Frontend UI component displaying usage statistics
webapp/_webapp/src/query/*.ts: React Query hooks and API client functions for usage endpoints
webapp/_webapp/src/paperdebugger.tsx: Added Usage tab to main navigation
pkg/gen/api/chat/v2/chat.pb.go: Generated code formatting change (import order)
internal/api/chat/create_conversation_message_stream_v2.go: Updated to pass userID for usage tracking
internal/services/toolkit/client/get_citation_keys_test.go: Test setup updated with usage service dependency
go.sum: Updated Go module dependencies


Comment on lines +102 to +117
	if chunk.Usage.TotalTokens > 0 {
		// Record usage asynchronously to avoid blocking the response
		go func(usage services.UsageRecord) {
			bgCtx := context.Background()
			if err := a.usageService.RecordUsage(bgCtx, usage); err != nil {
				a.logger.Error("Failed to store usage", "error", err)
				return
			}
		}(services.UsageRecord{
			UserID:           userID,
			PromptTokens:     chunk.Usage.PromptTokens,
			CompletionTokens: chunk.Usage.CompletionTokens,
			TotalTokens:      chunk.Usage.TotalTokens,
		})
	}
Copilot AI Feb 23, 2026

The usage recording happens asynchronously in a goroutine with a background context (line 105). If the application shuts down, these goroutines may be terminated before they complete, leading to lost usage data. Consider using a context with a short timeout derived from the request context, or implementing a graceful shutdown mechanism that waits for pending usage records to be written. Alternatively, add buffering or a queue mechanism to ensure usage data is not lost during shutdown.

@4ndrelim 4ndrelim (Member) Feb 23, 2026

This is a nice to have, but not strictly necessary, and it might not be worth the effort. It's an edge case, and if it does happen, I guess we can afford to lose some token tracking.

Comment on lines +1 to +175
package services

import (
	"context"
	"time"

	"paperdebugger/internal/libs/cfg"
	"paperdebugger/internal/libs/db"
	"paperdebugger/internal/libs/logger"
	"paperdebugger/internal/models"

	"go.mongodb.org/mongo-driver/v2/bson"
	"go.mongodb.org/mongo-driver/v2/mongo"
	"go.mongodb.org/mongo-driver/v2/mongo/options"
)

const SessionDuration = 5 * time.Hour

type UsageService struct {
	BaseService
	sessionCollection *mongo.Collection
}

type UsageRecord struct {
	UserID           bson.ObjectID
	PromptTokens     int64
	CompletionTokens int64
	TotalTokens      int64
}

type UsageStats struct {
	PromptTokens     int64 `bson:"prompt_tokens"`
	CompletionTokens int64 `bson:"completion_tokens"`
	TotalTokens      int64 `bson:"total_tokens"`
	RequestCount     int64 `bson:"request_count"`
	SessionCount     int64 `bson:"session_count"`
}

func NewUsageService(db *db.DB, cfg *cfg.Cfg, logger *logger.Logger) *UsageService {
	base := NewBaseService(db, cfg, logger)
	return &UsageService{
		BaseService:       base,
		sessionCollection: base.db.Collection((models.LLMSession{}).CollectionName()),
	}
}

// RecordUsage updates the active session or creates a new one if none exists.
// Falls back to update if insert fails (handles race when another request created a session).
func (s *UsageService) RecordUsage(ctx context.Context, record UsageRecord) error {
	now := time.Now()
	nowBson := bson.DateTime(now.UnixMilli())

	filter := bson.M{
		"user_id":        record.UserID,
		"session_expiry": bson.M{"$gt": nowBson},
	}
	update := bson.M{
		"$inc": bson.M{
			"prompt_tokens":     record.PromptTokens,
			"completion_tokens": record.CompletionTokens,
			"total_tokens":      record.TotalTokens,
			"request_count":     1,
		},
	}

	result, err := s.sessionCollection.UpdateOne(ctx, filter, update)
	if err != nil {
		return err
	}
	if result.MatchedCount > 0 {
		return nil
	}

	// No active session found - create a new one
	session := models.LLMSession{
		ID:               bson.NewObjectID(),
		UserID:           record.UserID,
		SessionStart:     nowBson,
		SessionExpiry:    bson.DateTime(now.Add(SessionDuration).UnixMilli()),
		PromptTokens:     record.PromptTokens,
		CompletionTokens: record.CompletionTokens,
		TotalTokens:      record.TotalTokens,
		RequestCount:     1,
	}
	_, err = s.sessionCollection.InsertOne(ctx, session)
	if err != nil {
		// Insert failed (race condition or other error) - retry update
		_, err = s.sessionCollection.UpdateOne(ctx, filter, update)
	}
	return err
}

// GetActiveSession returns the current active session for a user, if any.
func (s *UsageService) GetActiveSession(ctx context.Context, userID bson.ObjectID) (*models.LLMSession, error) {
	now := bson.DateTime(time.Now().UnixMilli())
	filter := bson.M{
		"user_id":        userID,
		"session_expiry": bson.M{"$gt": now},
	}

	var session models.LLMSession
	err := s.sessionCollection.FindOne(ctx, filter).Decode(&session)
	if err == mongo.ErrNoDocuments {
		return nil, nil
	}
	if err != nil {
		return nil, err
	}
	return &session, nil
}

// GetWeeklyUsage returns aggregated usage for a user for the current week (Monday-Sunday).
func (s *UsageService) GetWeeklyUsage(ctx context.Context, userID bson.ObjectID) (*UsageStats, error) {
	weekStart := startOfWeek(time.Now())
	return s.getUsageSince(ctx, userID, weekStart)
}

func (s *UsageService) getUsageSince(ctx context.Context, userID bson.ObjectID, since time.Time) (*UsageStats, error) {
	pipeline := bson.A{
		bson.M{"$match": bson.M{
			"user_id":       userID,
			"session_start": bson.M{"$gte": bson.DateTime(since.UnixMilli())},
		}},
		bson.M{"$group": bson.M{
			"_id":               nil,
			"prompt_tokens":     bson.M{"$sum": "$prompt_tokens"},
			"completion_tokens": bson.M{"$sum": "$completion_tokens"},
			"total_tokens":      bson.M{"$sum": "$total_tokens"},
			"request_count":     bson.M{"$sum": "$request_count"},
			"session_count":     bson.M{"$sum": 1},
		}},
	}

	cursor, err := s.sessionCollection.Aggregate(ctx, pipeline)
	if err != nil {
		return nil, err
	}
	defer cursor.Close(ctx)

	if cursor.Next(ctx) {
		var result UsageStats
		if err := cursor.Decode(&result); err != nil {
			return nil, err
		}
		return &result, nil
	}
	return &UsageStats{}, nil
}

// startOfWeek returns the start of the week (Monday 00:00:00 UTC).
func startOfWeek(t time.Time) time.Time {
	t = t.UTC()
	daysFromMonday := (int(t.Weekday()) + 6) % 7 // Sunday=6, Monday=0, Tuesday=1, ...
	return time.Date(t.Year(), t.Month(), t.Day()-daysFromMonday, 0, 0, 0, 0, time.UTC)
}

// ListRecentSessions returns the most recent sessions for a user.
func (s *UsageService) ListRecentSessions(ctx context.Context, userID bson.ObjectID, limit int64) ([]models.LLMSession, error) {
	filter := bson.M{"user_id": userID}
	opts := options.Find().
		SetSort(bson.D{{Key: "session_start", Value: -1}}).
		SetLimit(limit)

	cursor, err := s.sessionCollection.Find(ctx, filter, opts)
	if err != nil {
		return nil, err
	}
	defer cursor.Close(ctx)

	var sessions []models.LLMSession
	if err := cursor.All(ctx, &sessions); err != nil {
		return nil, err
	}
	return sessions, nil
}
Copilot AI Feb 23, 2026

The UsageService lacks test coverage. Other services in this codebase have test files (e.g., token_test.go for TokenService), but there is no usage_test.go. The RecordUsage method has complex logic including race condition handling and the GetWeeklyUsage method has aggregation logic that should be tested. Consider adding tests for: (1) Recording usage to an existing active session, (2) Creating a new session when none exists, (3) Handling race conditions when concurrent requests try to create sessions, (4) Weekly usage aggregation across multiple sessions, and (5) The startOfWeek calculation logic.

Comment on lines +87 to +90
		// Insert failed (race condition or other error) - retry update
		_, err = s.sessionCollection.UpdateOne(ctx, filter, update)
	}
	return err
Copilot AI Feb 23, 2026

The race condition handling in RecordUsage has a subtle issue. When an insert fails (line 85-89), it retries with an update. However, the original error from InsertOne is discarded and replaced with the error from the retry UpdateOne. If both the insert and retry update fail, the actual root cause (from the insert) is lost. Consider checking if the insert error is a duplicate key error specifically, and only then retry. If the insert failed for other reasons (e.g., network issues, schema validation), retrying with an update may not be appropriate and could mask the real problem.

Suggested change

-		// Insert failed (race condition or other error) - retry update
-		_, err = s.sessionCollection.UpdateOne(ctx, filter, update)
-	}
-	return err
+		// Only retry update on duplicate key (race condition); otherwise return the original error.
+		if mongo.IsDuplicateKeyError(err) {
+			_, updateErr := s.sessionCollection.UpdateOne(ctx, filter, update)
+			if updateErr != nil {
+				return updateErr
+			}
+			return nil
+		}
+		return err
+	}
+	return nil
Comment on lines +118 to +123
func (s *UsageService) getUsageSince(ctx context.Context, userID bson.ObjectID, since time.Time) (*UsageStats, error) {
	pipeline := bson.A{
		bson.M{"$match": bson.M{
			"user_id":       userID,
			"session_start": bson.M{"$gte": bson.DateTime(since.UnixMilli())},
		}},
Copilot AI Feb 23, 2026

The GetWeeklyUsage query filters and sorts by session_start (line 122), but there is no database index defined for this field. The ensureIndexes function in internal/libs/db/db.go only creates indexes on session_expiry and a compound index on (user_id, session_expiry). For efficient weekly usage queries, consider adding an index on (user_id, session_start) to improve query performance as the llm_sessions collection grows.

Member Author

wjiayis commented Feb 23, 2026

@Junyi-99 Sure, a USD-based cap makes a lot of sense!

Also, we should support different limits for different users (e.g., tiered usage caps).

Are we employing a freemium model and charging heavy-usage users? I think it's a great way to monetize PaperDebugger, balancing ease of onboarding for new users, API costs, and returns to sustain development and maintenance.

Just off the top of my head, I suggest we could roll this out in phases:

  1. We deploy the token counter on the backend but don't display it on the frontend, to observe the usage distribution.
  2. Based on the usage distribution, we launch 3 tiers of usage: Free, Lite and Pro.
    a. The Free tier could have limits of $1 per session, $2 per week, $3 per month
    b. The Lite tier could be a reasonable limit that encompasses almost all users
    c. The Pro tier could be a really high limit for a few users
  3. We could also use ads as an alternative source of income.

Alternatively if we decide not to monetize PaperDebugger, I could block all requests past the free tier usage.
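
The tier check itself could be a small pure function in front of the usage service. A sketch, where only the Free-tier numbers come from this thread; the Lite and Pro figures, and the Allowed helper, are placeholders:

```go
package main

import "fmt"

// TierLimits holds USD caps per window. Free matches the numbers proposed
// in this thread; Lite and Pro are placeholder figures.
type TierLimits struct {
	SessionUSD float64
	WeekUSD    float64
	MonthUSD   float64
}

var tiers = map[string]TierLimits{
	"free": {SessionUSD: 1, WeekUSD: 2, MonthUSD: 3},
	"lite": {SessionUSD: 5, WeekUSD: 10, MonthUSD: 20},   // placeholder
	"pro":  {SessionUSD: 50, WeekUSD: 100, MonthUSD: 200}, // placeholder
}

// Allowed reports whether a request may proceed, given the user's current
// estimated spend in each window. Unknown tiers are blocked by default.
func Allowed(tier string, sessionUSD, weekUSD, monthUSD float64) bool {
	l, ok := tiers[tier]
	if !ok {
		return false
	}
	return sessionUSD < l.SessionUSD && weekUSD < l.WeekUSD && monthUSD < l.MonthUSD
}

func main() {
	fmt.Println(Allowed("free", 0.25, 1.10, 1.80)) // under all caps: true
	fmt.Println(Allowed("free", 0.25, 2.10, 2.50)) // weekly cap exceeded: false
}
```

Keeping the check pure makes it trivial to unit-test and to swap in per-user overrides later.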

Member

@4ndrelim 4ndrelim left a comment


Thanks for the good work, Jia Yi! I haven't finished looking through all the files yet, and there are some parts I'm not too sure about, but I'll approve first for integration testing on staging and do a deeper review in the meantime, before the final PR to main.

import "go.mongodb.org/mongo-driver/v2/bson"

// LLMSession represents a user's session for tracking LLM usage and token counts.
type LLMSession struct {
Member

Is this being written to the MongoDB store? Are you writing the usage tokens into the DB to track weekly usage?

Is there a TTL policy or deletion of token-tracking data, or do we retain every week's token usage data without deleting?

Member Author

  1. Not super sure if I'm answering your question, but I'm writing 5-hour session data into the DB. Per-session, per-week and per-month usage are calculated from that.
  2. I retain data for 30 days after each session's expiry, for record-keeping purposes and for rollback in case we mess something up.

Member

Oh yup, thanks. I didn't inspect every file and change thoroughly, so I wasn't entirely sure of the full workflow.

Then may I check: are you creating a new collection in MongoDB, or updating an existing schema?

			}

		}(services.UsageRecord{
			UserID: userID,
Member

Just a quick check, you are using the UserID generated by PD backend right?

I noticed there might be an interesting edge case. UserID, I believe, is generated from the user's email address on Overleaf. Now, if the user logs in via Overleaf, a UserID is generated. If the same user logs in via a Google account, I think a different UserID might be generated for the same user (it'll be the same if the same Gmail is registered on Overleaf).

Ideally we should recognise these as the same user and combine them, or avoid re-generating the UserID. But this is a separate problem, and the fix (if this is indeed the case) should not be overloaded onto this PR.

If it's convenient, could you also test and verify this during integration testing? You can try two different login methods on the same Overleaf project, and we should expect two separate usage-tracking records.

Member Author

Yup, it's the UserID generated by the PD backend. Sure, will take note when testing. I also wonder if a user could just keep switching emails for the same Overleaf project. Shall we have both a per-user limit and a per-project limit?

cc: @Junyi-99

@4ndrelim 4ndrelim (Member) Feb 23, 2026

Yeah, switching out emails will generate a different UserID, I believe. Not sure if having a limit on ProjectID is wise, because we can have different collaborators working on the same project.

Edit: Not sure if Overleaf has any safeguards / cooldowns on switching out emails too frequently. But yeah, we can keep this in mind, since it's a separate problem. Ideally, UserID is generated in a way that is unique and accurately tied to the account.
