
Optimize Gemini API Cost and Latency with Firestore Caching

By Vo Tu Duc
Published in Cloud Engineering
March 22, 2026

Integrating LLMs like Gemini into Google Workspace unlocks massive productivity gains, bringing advanced AI directly to the tools you use every day. Discover how to supercharge your automated workflows while navigating the unique challenges of scaling these powerful models.


The Challenge of Scaling LLMs in Google Workspace

Integrating Large Language Models (LLMs) like Gemini into the Google Workspace ecosystem—whether through Apps Script automations, custom Workspace Add-ons, or standalone web apps—unlocks incredible productivity gains. It allows developers to bring advanced reasoning, summarization, and generative capabilities directly into the tools users rely on daily, such as Docs, Sheets, and Gmail. However, transitioning an LLM integration from a localized proof-of-concept to a production-grade application serving hundreds or thousands of users introduces significant architectural hurdles.

Workspace environments are inherently highly interactive, collaborative, and often operate on shared datasets. Because of this, the way your application handles incoming and outgoing LLM requests directly dictates both its financial viability and its usability. When scaling these integrations, Cloud Engineers consistently run into two primary roadblocks.

High Costs of Redundant API Calls

The financial model of the Gemini API, like most modern LLMs, is metered based on token consumption for both input prompts and output generations. In a typical collaborative Workspace deployment, query redundancy is practically guaranteed.

Consider a Google Sheets custom function powered by Gemini designed to categorize incoming customer feedback, or a Google Docs add-on that generates standardized compliance summaries. If multiple users on a team run the same analysis on a shared document, or if a user accidentally triggers a sheet recalculation that re-evaluates hundreds of unchanged rows, your application will send identical prompts to the Gemini API.

Without an intelligent architectural intervention, you are effectively paying Google Cloud for the exact same computational work over and over again. As your user base grows and the volume of daily Workspace interactions increases, these redundant API calls compound rapidly.


Latency Bottlenecks in User Experience

Beyond the financial implications, relying exclusively on live API calls severely degrades the end-user experience. LLM inference is a computationally intensive process; generating a high-quality, nuanced response from Gemini can take anywhere from a few hundred milliseconds to several seconds, depending on the complexity of the prompt and the desired output token count.

In the context of Google Workspace, users expect snappy, near-instantaneous feedback. If a user clicks a button in a Gmail Add-on to draft a contextual reply, or waits for a Google Sheets macro to populate a column of data, a multi-second delay breaks their state of flow.

Furthermore, Workspace extension environments have strict execution constraints. For example, Google Apps Script enforces a 30-second maximum execution time for custom functions in Sheets, and a 6-minute limit for standard script executions. If your application relies on real-time, synchronous calls to the Gemini API for every single interaction—especially when processing batches of data—you run a high risk of hitting these hard timeouts. This results in failed executions, #ERROR! outputs in spreadsheets, and ultimately, frustrated users who will abandon the tool. Relying solely on real-time inference without a strategy to serve repeated queries instantly is a fundamental bottleneck to scaling any AI-powered Workspace application.

Architectural Solution Using Firestore as a Global Cache

To effectively mitigate the costs and latency associated with repetitive Gemini API calls, we need an intelligent caching layer. The architectural solution revolves around intercepting the prompt before it reaches the Gemini model, checking a globally accessible storage layer for a pre-existing answer, and serving that cached response whenever possible. By implementing Google Cloud Firestore as this global cache, we create a robust, serverless architecture that seamlessly bridges Google Workspace and Google Cloud.

In this design, the execution flow is straightforward but highly effective:

  1. A user or system generates a prompt.

  2. The middleware generates a unique, deterministic hash (such as SHA-256) of the prompt to serve as a lookup key.

  3. The system queries Firestore using this hashed key.

  4. Cache Hit: If the document exists, the cached Gemini response is retrieved and returned instantly. Latency is reduced to milliseconds, and the LLM inference cost is completely bypassed.

  5. Cache Miss: If the document does not exist, the system forwards the prompt to the Gemini API, retrieves the generated response, serves it to the user, and writes the key-value pair back to Firestore for future requests.

Why Firestore for Key-Value Storage

While there are dedicated in-memory caching solutions like Redis or Memcached, Cloud Firestore presents a uniquely compelling case for this specific architecture, particularly within the Google Cloud ecosystem. Although Firestore is primarily a NoSQL document database, it excels when utilized as a highly scalable Key-Value store for LLM caching.

  • Serverless and Auto-Scaling: Firestore requires no infrastructure provisioning. It scales automatically from zero to millions of requests, perfectly accommodating the unpredictable burst traffic often associated with generative AI workloads.

  • Global Distribution and Low Latency: Firestore’s multi-region deployment ensures that your cache is highly available and geographically close to your compute resources, minimizing network overhead and keeping response times exceptionally fast.

  • Native TTL (Time-to-Live): LLM responses shouldn’t always live forever, especially if underlying data or models change. Firestore’s native TTL policies allow you to automatically expire and delete cached responses after a specific duration (e.g., 30 days). This keeps your storage costs low and ensures data freshness without the need to write and maintain custom cleanup scripts.

  • Generous Free Tier and Cost Efficiency: Firestore’s pricing model is based on document reads, writes, and storage. Compared to the per-token cost of large context windows in Gemini, a Firestore read operation is infinitesimally cheaper. The generous daily free tier often means your caching layer runs at practically zero cost for small to medium workloads.
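If you adopt the native TTL approach described above, the policy can be enabled with a single gcloud command. This is a minimal sketch assuming an expiresAt timestamp field on a gemini_cache collection group; adjust both names to match your own schema:

```shell
# Enable a TTL policy so documents are deleted automatically once their
# expiresAt timestamp passes (field and collection names are examples).
gcloud firestore fields ttls update expiresAt \
  --collection-group=gemini_cache \
  --enable-ttl
```

Deletion is not instantaneous; Firestore typically removes expired documents within a day or so of the TTL timestamp, which is more than adequate for cache eviction.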

The Role of Apps Script as Cache Middleware

To connect the user interface (such as a Google Sheet, Google Doc, or custom Workspace Add-on) to both Firestore and the Gemini API, we need a lightweight, serverless execution environment. Google Apps Script serves as the perfect middleware for this orchestration.

Acting as the proxy between the client and the backend services, Apps Script handles the core business logic of our caching strategy. When a prompt is triggered, the Apps Script environment executes the following critical functions:

  • Prompt Normalization and Hashing: To ensure high cache hit rates, Apps Script normalizes the input (trimming whitespace, standardizing casing) and computes a cryptographic hash using Utilities.computeDigest(). This hash becomes the unique, URL-safe document ID for the Firestore record.

  • Service Orchestration: Apps Script utilizes UrlFetchApp to communicate with the Firestore REST API (or via a dedicated Google OAuth2 library) to check for the existence of the hashed key.

  • Fallback and Write-Back Logic: On a cache miss, Apps Script seamlessly routes the request to the Gemini API. Once the payload is returned, Apps Script parses the JSON response, delivers the output to the Workspace application, and simultaneously executes a write operation back to Firestore to populate the cache for subsequent identical prompts.

By positioning Apps Script as the middleware, we eliminate the need to spin up and maintain external Node.js or Python servers. It provides a secure, authenticated bridge that inherently understands the Google Workspace context while possessing the capability to interact directly with advanced Google Cloud infrastructure like Firestore and Vertex AI.

Designing the Caching Logic

A robust caching mechanism sits at the heart of any cost-optimized LLM architecture. When integrating the Gemini API with Google Workspace—whether you are summarizing lengthy email threads in Gmail or generating boilerplate text in Google Docs—users will frequently trigger identical or highly similar requests. Without a caching layer, every single request incurs API costs and introduces generative latency. By leveraging Firestore as a high-speed, NoSQL caching layer, we can intercept these duplicate requests, bypassing the Gemini API entirely.

To achieve this, the caching logic must be deterministic, highly performant, and capable of handling large input payloads without hitting database constraints.

Hashing Workspace Inputs for Exact Matches

In a typical Workspace integration, a prompt isn’t just a short sentence; it usually consists of a directive combined with a massive payload of context. For example, a user might highlight a 10-page Google Doc and click “Summarize.” Using this raw, concatenated string as a Firestore document ID is an anti-pattern. Not only does it risk exceeding Firestore’s 1,500-byte limit for document IDs, but it is also highly inefficient for database indexing and lookups.

The solution is to generate a cryptographic hash of the input to serve as a unique, fixed-length cache key. To ensure exact matches, you must serialize all variables that influence the Gemini model’s output. This includes:

  • The system instructions.

  • The user’s specific prompt.

  • The Workspace context (the raw text extracted from Docs, Sheets, or Gmail).

  • Model parameters (e.g., temperature, topK, topP).

Using a standard algorithm like SHA-256 ensures that identical inputs always produce the exact same hash, while even a single character difference in the Workspace context will generate a completely different key.

Here is an example of how you might generate this hash in a Node.js Cloud Function:


const crypto = require('crypto');

function generateCacheKey(prompt, workspaceContext, modelConfig) {
  // Create a deterministic string representation of the input
  const rawInput = JSON.stringify({
    prompt: prompt.trim(),
    context: workspaceContext.trim(),
    config: modelConfig
  });

  // Generate a SHA-256 hash to use as the Firestore Document ID
  return crypto.createHash('sha256').update(rawInput).digest('hex');
}

By hashing the input, you transform megabytes of potential Workspace context into a clean, 64-character string. This guarantees lightning-fast Firestore lookups and completely sidesteps document ID size limits.

Storing and Retrieving Gemini API Responses

With a deterministic cache key in hand, the operational flow of your application shifts from a direct API call to a “Cache-Aside” pattern. Firestore acts as the intermediary between your Workspace add-on and the Gemini API.

The Retrieval Phase (Cache Read)

When a request originates from your Workspace application, your backend first generates the SHA-256 hash. It then performs a direct point-read against Firestore using that hash as the document ID (e.g., db.collection('gemini_cache').doc(hashKey).get()). Point-reads in Firestore are incredibly fast and cost-effective.

  • Cache Hit: If the document exists, your backend immediately returns the stored Gemini response to the Workspace UI. What would normally be a 3-to-5 second wait for generation is reduced to a sub-100 millisecond database read.

  • Cache Miss: If the document does not exist, the application proceeds to call the Gemini API.

The Storage Phase (Cache Write)

Upon a cache miss, the Gemini API processes the prompt and returns the generated text. Before passing this response back to the user, your backend asynchronously writes the payload to Firestore.

When storing the response, it is a cloud engineering best practice to include a timestamp. This allows you to leverage Firestore Time-to-Live (TTL) policies. LLM data can become stale, and storing infinite variations of summarized emails will eventually inflate your Cloud Storage costs. By appending an expiresAt field to your Firestore document, you can configure Google Cloud to automatically delete cache entries after a specific duration (e.g., 7 or 30 days), ensuring your database remains lean and cost-efficient.


async function fetchFromGeminiWithCache(cacheKey, apiPayload) {
  const cacheRef = firestore.collection('gemini_cache').doc(cacheKey);
  const doc = await cacheRef.get();

  if (doc.exists) {
    console.log('Cache hit! Returning optimized response.');
    return doc.data().response;
  }

  console.log('Cache miss. Calling Gemini API...');
  const geminiResponse = await callGeminiApi(apiPayload); // Your API call logic

  // Save to Firestore with a 30-day expiresAt so the TTL policy evicts it
  const expirationDate = new Date();
  expirationDate.setDate(expirationDate.getDate() + 30);

  await cacheRef.set({
    response: geminiResponse,
    createdAt: Firestore.FieldValue.serverTimestamp(),
    expiresAt: expirationDate
  });

  return geminiResponse;
}

This dual-action logic ensures that your Workspace application only pays the latency and financial cost of a Gemini API call exactly once per unique input, while Firestore seamlessly handles the heavy lifting for all subsequent identical requests.

Implementing the Tech Stack

To build a highly efficient, cost-effective generative AI pipeline, we need to orchestrate three core components: Google Cloud Firestore for our low-latency caching layer, Google Apps Script as the serverless middleware, and the Gemini API for our foundational model capabilities. By carefully integrating these tools, we can intercept redundant requests before they ever reach the billing threshold of the Gemini API.

Setting Up Firestore Database Rules

Security and access control are paramount when exposing a database to an application middleware. Because our Google Apps Script environment will be communicating with Firestore, we must ensure that our database rules strictly govern who can read and write to our caching collection.

If your Apps Script project is utilizing a Google Cloud Service Account with the appropriate IAM roles (e.g., Cloud Datastore User) to authenticate via the REST API, IAM policies will naturally supersede standard client rules. However, if you are using Identity-Aware Proxy (IAP) or standard client-side authentication tokens, you must explicitly define your firestore.rules.

Navigate to the Firebase Console or your local Firebase project directory and configure the rules for your gemini_cache collection:


rules_version = '2';

service cloud.firestore {
  match /databases/{database}/documents {
    // Restrict access to the caching collection
    match /gemini_cache/{document=**} {
      // Only allow reads from authenticated requests.
      // In a strict enterprise environment, validate specific claims or service account IDs.
      allow read: if request.auth != null;

      // Writes must be authenticated AND match the expected cache schema.
      // Note: separate `allow` statements are OR'ed together, so the schema
      // check must live in the same rule as the auth check.
      allow write: if request.auth != null
        && request.resource.data.keys().hasAll(['promptHash', 'response', 'timestamp'])
        && request.resource.data.response is string;
    }
  }
}

These rules ensure that unauthorized external actors cannot pollute your cache or scrape your previously generated Gemini responses, keeping your intellectual property and data secure.

Writing the Apps Script Middleware

Google Apps Script serves as the perfect serverless glue between Google Workspace applications (like Docs or Sheets) and Google Cloud. Our middleware needs to perform three tasks: receive the prompt, generate a deterministic hash of that prompt to use as a Firestore document ID, and check if that document already exists.

To create a unique, URL-safe document ID, we will use the SHA-256 algorithm. This ensures that identical prompts always generate the exact same cache key.

Here is the core Apps Script logic for the caching middleware:


/**
 * Generates a SHA-256 hash from the input prompt to use as a Firestore Document ID.
 */
function generateCacheKey(prompt) {
  const signature = Utilities.computeDigest(
    Utilities.DigestAlgorithm.SHA_256,
    prompt.trim().toLowerCase()
  );
  // Convert signed bytes to an unsigned, zero-padded hex string
  return signature
    .map(b => (b < 0 ? b + 256 : b).toString(16).padStart(2, '0'))
    .join('');
}

/**
 * Checks Firestore for a cached response.
 * Note: This assumes the use of a Firestore Apps Script library (e.g., FirestoreApp).
 */
function getCachedResponse(prompt) {
  const firestore = getFirestoreConnection(); // Helper function to init Firestore
  const cacheKey = generateCacheKey(prompt);
  try {
    const document = firestore.getDocument(`gemini_cache/${cacheKey}`);
    if (document && document.fields && document.fields.response) {
      console.log(`Cache HIT for key: ${cacheKey}`);
      return document.fields.response.stringValue;
    }
  } catch (e) {
    // Document not found or a transport error occurred; fall through to a miss
  }
  console.log(`Cache MISS for key: ${cacheKey}`);
  return null;
}

/**
 * Saves a new Gemini response to Firestore.
 */
function cacheResponse(prompt, responseText) {
  const firestore = getFirestoreConnection();
  const cacheKey = generateCacheKey(prompt);
  const data = {
    promptHash: cacheKey,
    response: responseText,
    timestamp: new Date().toISOString()
  };
  firestore.createDocument(`gemini_cache/${cacheKey}`, data);
}

By converting the prompt to lowercase and trimming whitespace before hashing, we increase our cache hit rate by neutralizing minor, insignificant formatting differences in the user’s input.

Connecting to the Gemini API

When our middleware registers a cache miss, it must seamlessly fall back to the Gemini API, retrieve the generated content, and then store that content in Firestore for future requests.

We will use Apps Script’s native UrlFetchApp service to make the HTTP POST request to the Gemini endpoint. For security, ensure your API key is stored in the Apps Script PropertiesService rather than hardcoded in your script.


/**
 * Orchestrates the cache check, Gemini API call, and cache storage.
 */
function generateText(prompt) {
  // 1. Check the Firestore Cache
  const cachedResponse = getCachedResponse(prompt);
  if (cachedResponse) {
    return cachedResponse; // Return early, saving cost and latency
  }

  // 2. Cache Miss: Call the Gemini API
  const apiKey = PropertiesService.getScriptProperties().getProperty('GEMINI_API_KEY');
  const url = `https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=${apiKey}`;

  const payload = {
    contents: [{
      parts: [{ text: prompt }]
    }],
    generationConfig: {
      temperature: 0.2 // Lower temperature for more deterministic caching
    }
  };

  const options = {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify(payload),
    muteHttpExceptions: true
  };

  const response = UrlFetchApp.fetch(url, options);
  const json = JSON.parse(response.getContentText());

  if (response.getResponseCode() !== 200) {
    throw new Error(`Gemini API Error: ${json.error.message}`);
  }

  const generatedText = json.candidates[0].content.parts[0].text;

  // 3. Save the new response to Firestore
  cacheResponse(prompt, generatedText);
  return generatedText;
}

Notice the temperature setting in the payload. When designing a system heavily reliant on caching, lowering the temperature (e.g., 0.2) is a highly recommended Cloud Engineering practice. It forces the LLM to provide more deterministic, factual responses, which are generally safer and more valuable to cache across multiple users than highly creative, highly variable outputs.

Measuring Efficiency Gains

Implementing a Firestore caching layer for the Gemini API sounds excellent in theory, but the true value of any cloud architecture decision lies in the telemetry. To validate our caching strategy, we need to quantify the improvements across two critical dimensions: the impact on our Google Cloud billing and the speed at which our application serves users. Let’s break down how to measure and understand these tangible benefits.

Cost Reduction Analysis

Generative AI models, while incredibly powerful, operate on a pay-per-token pricing model. Every time an application queries the Gemini API, costs are incurred for both the input prompt tokens and the generated output tokens. For applications with high traffic or repetitive queries—such as customer support bots, automated report generators, or standard data extraction tools—these inference costs can scale exponentially.

Enter Firestore. Google Cloud’s serverless NoSQL document database charges primarily based on document reads, writes, and storage. The cost of a single Firestore document read is infinitesimally small compared to a complex Gemini API generation request.

Let’s look at a practical calculation. Suppose your application processes 100,000 queries a month, and each Gemini API call costs an average of $0.01 (accounting for rich context windows and lengthy outputs). Without caching, your monthly API cost is $1,000.

If you implement a Firestore cache and achieve a modest 30% cache hit rate (meaning 30,000 queries are served directly from the database), you bypass the Gemini API entirely for those requests. The cost of 30,000 Firestore reads is a few cents at most, and often falls entirely within the generous GCP free tier. Your monthly AI bill drops to $700. This represents a clean 30% cost reduction, effectively turning repetitive user queries from a financial liability into a highly optimized asset.
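The arithmetic above can be sanity-checked in a few lines (the figures are the assumed averages from the example, not real billing data):

```javascript
// Back-of-the-envelope savings model using the example figures above.
const monthlyQueries = 100000;
const costPerGeminiCall = 0.01; // assumed blended cost per call, as in the text
const cacheHitRate = 0.30;

const withoutCache = monthlyQueries * costPerGeminiCall;
const withCache = monthlyQueries * (1 - cacheHitRate) * costPerGeminiCall;

console.log(withoutCache, withCache);
```

Plugging in your own traffic volume and measured hit rate gives a quick first estimate of whether the caching layer pays for itself (it almost always does).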

Latency Improvements and Performance Metrics

While cost optimization keeps the finance team happy, latency reduction is what keeps your users engaged. Large Language Models inherently introduce latency; generating high-quality, context-aware text requires heavy compute, often resulting in response times ranging from 1 to 5 seconds depending on the prompt’s complexity and the output token count. In modern web and mobile applications, a multi-second delay can lead to a degraded user experience and increased bounce rates.

Firestore, conversely, is engineered for rapid, global data retrieval. A well-indexed Firestore query typically returns a cached response in under 50 milliseconds. When a cache hit occurs, the application bypasses the LLM inference step entirely, resulting in a near-instantaneous response to the client.

To accurately measure these performance gains, you should leverage Google Cloud Monitoring (formerly Stackdriver) to track the following key metrics:

  • Cache Hit Ratio (CHR): The percentage of total requests served from Firestore versus those routed to the Gemini API. Tracking this metric over time helps you understand the effectiveness of your cache key generation (e.g., how you hash the prompts). A higher CHR directly correlates with greater cost and latency savings.

  • P90 and P99 Latency: Track the response times for the 90th and 99th percentiles. Once caching is implemented, you will likely notice a distinct bimodal distribution in your latency metrics: cache hits will cluster tightly in the low milliseconds, while cache misses will reflect standard Gemini API latency.

  • Time to First Byte (TTFB): For applications that stream responses, measure how quickly the user receives the first piece of data. While Gemini supports streaming to mitigate perceived latency, a cached Firestore response can deliver the entire payload immediately, drastically improving the UX.

By instrumenting your application to log these specific metrics, you can continuously tune your cache keys, adjust Time-to-Live (TTL) eviction policies, and maximize the efficiency of your Google Cloud architecture.
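For instance, the Cache Hit Ratio can be derived from two counters your middleware already logs. A minimal sketch (the counter names are up to you):

```javascript
// Cache Hit Ratio: fraction of total requests served from Firestore
// rather than routed to the Gemini API.
function cacheHitRatio(hits, misses) {
  const total = hits + misses;
  return total === 0 ? 0 : hits / total;
}

console.log(cacheHitRatio(30000, 70000)); // 0.3
```

Emitting this value as a custom metric to Cloud Monitoring lets you alert on regressions, for example when a prompt format change silently breaks your hash keys and the ratio collapses.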

Next Steps for Your AI Architecture

Implementing a Firestore-backed cache for your Gemini API requests is a massive leap forward in optimizing both cost and latency. However, as your application’s user base grows and your generative AI workloads become more complex, your infrastructure must evolve to handle the increased demand. Transitioning from a functional prototype to an enterprise-grade AI architecture requires a strategic approach to scaling and performance tuning.

Scaling Beyond Basic Caching

While exact-match caching in Firestore is highly effective for repetitive queries, modern AI applications often require more nuanced data retrieval strategies. To truly maximize the efficiency of your Google Cloud environment, consider implementing the following advanced architectural patterns:

  • Semantic Caching with Vertex AI Vector Search: An exact-match cache fails when a user asks the same question with slightly different wording. By generating embeddings for your incoming prompts and storing them in Vertex AI Vector Search (formerly Matching Engine) alongside your Firestore document IDs, you can implement semantic caching. This allows your system to retrieve cached Gemini responses for conceptually similar queries, drastically increasing your cache hit rate.

  • Multi-Tiered Caching (L1/L2): For ultra-low latency requirements, introduce Cloud Memorystore (Redis) as a Layer 1 (L1) in-memory cache. In this architecture, your application first checks Memorystore for lightning-fast retrieval. If a cache miss occurs, it falls back to Firestore (Layer 2) before finally making a costly call to the Gemini API.

  • Automated Lifecycle Management (TTL): AI models evolve, and cached responses can become stale or factually outdated. Leverage Firestore Time-to-Live (TTL) policies to automatically delete older cached responses after a specified duration (e.g., 30 days). This ensures your users always receive relatively fresh AI-generated content while simultaneously keeping your Firestore storage costs at an absolute minimum.

  • Asynchronous Cache Warming: If you can anticipate user queries—such as generating daily summaries or pre-computing responses for trending topics—use Cloud Scheduler and Cloud Tasks to asynchronously call the Gemini API during off-peak hours. You can then proactively populate your Firestore cache, ensuring zero-latency responses for your users during peak traffic times.
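To make the L1/L2 idea concrete, the lookup order can be sketched as follows. The cache clients are abstracted behind a minimal get/set interface; l1, l2, and generate are hypothetical stand-ins for Memorystore, Firestore, and the Gemini call:

```javascript
// Minimal sketch of a two-tier cache-aside lookup: L1 (in-memory),
// then L2 (Firestore), then the expensive generation step.
async function tieredLookup(key, { l1, l2, generate }) {
  const fromL1 = await l1.get(key);
  if (fromL1 !== undefined) return fromL1;

  const fromL2 = await l2.get(key);
  if (fromL2 !== undefined) {
    await l1.set(key, fromL2); // promote to the faster tier
    return fromL2;
  }

  const fresh = await generate(key); // e.g., the Gemini API call
  await l2.set(key, fresh);
  await l1.set(key, fresh);
  return fresh;
}
```

Plain in-memory Maps wrapped in this interface are enough to verify the promotion behavior locally before wiring in real Memorystore and Firestore clients.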

Book a GDE Discovery Call with Vo Tu Duc

Navigating the intricacies of generative AI, Google Cloud infrastructure, and Google Workspace integrations can be challenging. Whether you are struggling with unpredictable Gemini API billing, latency bottlenecks, or architecting a globally scalable backend, expert guidance can save you months of trial and error.

Take your cloud architecture to the next level by booking a discovery call with Vo Tu Duc, a recognized Google Developer Expert (GDE). During this focused session, you will get the opportunity to:

  • Audit Your Current Architecture: Review your existing GCP setup to identify performance bottlenecks and security vulnerabilities.

  • Optimize Cloud Spend: Discover advanced cost-saving strategies across Firestore, Vertex AI, and serverless compute resources.

  • Accelerate AI Integration: Get tailored, actionable advice on seamlessly integrating Gemini and other Google AI models into your specific application or Workspace environment.

Don’t leave your application’s performance to chance. Connect with a GDE to ensure your AI architecture is robust, cost-effective, and built to scale.

