
How Food & Beverage Brands Can Level Up Self-Service Before BFCM

Before the BFCM rush begins, we’re serving food & beverage CX teams seven easy self-serve upgrades to keep support tickets off their plate.
By Alexa Hertel

TL;DR:

  • Most food & beverage support tickets during BFCM are predictable. Subscription cancellations, WISMO, and product questions make up the bulk—so prep answers ahead of time.
  • Proactive CX site updates can drastically cut down repetitive tickets. Add ingredient lists, cooking instructions, and clear refund policies to product pages and FAQs.
  • FAQ pages should go deep, not just broad. Answer hyper-specific questions like “Will this break my fast?” to help customers self-serve without hesitation.
  • Transparency about stock reduces confusion and cart abandonment. Show inventory levels, set up waitlists, and clearly state cancellation windows.

In 2024, Shopify merchants drove $11.5 billion in sales over Black Friday Cyber Monday. Now, BFCM is quickly approaching, with some brands and major retailers already hosting sales.

If you’re feeling late to prepare for the season or want to maximize the number of sales you’ll make, we’ll cover how food and beverage CX teams can serve up better self-serve resources for this year’s BFCM. 

Learn how to answer and deflect customers’ top questions before they’re escalated to your support team.

💡 Your guide to everything peak season → The Gorgias BFCM Hub

Handling BFCM as a food & beverage brand

During busy seasons like BFCM, staying on top of routine customer asks can be a serious challenge. 

“Every founder thinks BFCM is the highest peak feeling of nervousness,” says Ron Shah, CEO and Co-founder of supplement brand Obvi.

“It’s a tough week. So anything that makes our team’s life easier instantly means we can focus more on things that need the time,” he continues. 

Anticipating contact reasons and preparing responses ahead of time (with automated responses, Macros, or an AI Agent) can help. Below, find the top contact reasons for food and beverage companies in 2025. 

Top contact reasons in the food & beverage industry 

According to Gorgias’s proprietary data, the top reason customers reach out to brands in the food and beverage industry is to cancel a subscription (13%), followed by order status questions (9.1%).

Contact reason and share of tickets:

  • 🍽️ Subscription cancellation: 13%
  • 🚚 Order status (WISMO): 9.1%
  • ❌ Order cancellation: 6.5%
  • 🥫 Product details: 5.7%
  • 🧃 Product availability: 4.1%
  • ⭐ Positive feedback: 3.9%

7 ways to improve your self-serve resources before BFCM

  1. Add informative blurbs on product pages 
  2. Craft additional help center and FAQ articles 
  3. Automate responses with AI or Macros 
  4. Get specific about product availability
  5. Provide order cancellation and refund policies upfront
  6. Add how-to information
  7. Build resources to help with buying decisions 

1) Add informative blurbs on product pages

Because product detail queries represent 5.7% of contact reasons for the food and beverage industry, the more information you provide on your product pages, the better. 

Include things like calorie content, nutritional information, and all ingredients.  

For example, ready-to-heat meal company The Dinner Ladies includes a dropdown menu on each product page for further reading. Categories include serving instructions, a full ingredient list, allergens, nutritional information, and even a handy “size guide” that shows how many people the meal serves. 

The Dinner Ladies product page showing parmesan biscuits with tapenade and mascarpone.
The Dinner Ladies includes a drop-down menu full of key information on its product pages. The Dinner Ladies

2) Craft additional Help Center and FAQ articles

FAQ pages make up the information hub of your website. They exist to provide customers with a way to get their questions answered without reaching out to you.   

This includes information like how food should be stored, how long its shelf life is, delivery range, and serving instructions. FAQs can even direct customers toward finding out where their order is and what its status is. 

Graphic listing benefits of FAQ pages including saving time and improving SEO.

In the context of BFCM, FAQs are all about deflecting repetitive questions away from your team and assisting shoppers in finding what they need faster. 

That’s the strategy for German supplement brand mybacs.

“Our focus is to improve automations to make it easier for customers to self-handle their requests. This goes hand in hand with making our FAQs more comprehensive to give customers all the information they need,” says Alexander Grassmann, its Co-Founder & COO.

As you contemplate what to add to your FAQ page, remember that more information is usually better. That’s the approach Everyday Dose takes, answering even hyper-specific questions like, “Will it break my fast?” or “Do I have to use milk?”

Everyday Dose FAQ page showing product, payments, and subscription question categories.
Everyday Dose has an extensive FAQ page that guides shoppers through top questions and answers. Everyday Dose

While the FAQs you choose to add will be specific to your products, peruse the top-notch food and bev FAQ pages below. 


3) Automate responses with AI or macros

AI Agents and AI-powered Shopping Assistants are easy to set up and are extremely effective in handling customer interactions––especially during BFCM.  

“I told our team we were going to onboard Gorgias AI Agent for BFCM, so a good portion of tickets would be handled automatically,” says Ron Shah, CEO and Co-founder at Obvi. “There was a huge sigh of relief knowing that customers were going to be taken care of.” 

And they’re getting smarter: AI Agent’s CSAT is just 0.6 points shy of human agents’ average CSAT score. 

Obvi homepage promoting Black Friday sale with 50% off and chat support window open.
Obvi 

Here are the specific responses and use cases we recommend automating:

  • WISMO (where is my order?) inquiries 
  • Product-related questions 
  • Returns 
  • Order issues
  • Cancellations 
  • Discounts, including BFCM-related offers 
  • Customer feedback
  • Account management
  • Collaboration requests 
  • Rerouting complex queries

Get your checklist here: How to prep for peak season: BFCM automation checklist

4) Get specific about product availability

Steep price reductions often mean faster-than-usual sell-out times. Offering transparency around item quantities helps you avoid frustrating or upsetting customers. 

For example, you could show how many items are left under a certain threshold (e.g. “Only 10 items left”), or, like Rebel Cheese does, mention whether items have sold out in the past.  

Rebel Cheese product page for Thanksgiving Cheeseboard Classics featuring six vegan cheeses on wood board.
Rebel Cheese warns shoppers that its Thanksgiving cheese board has sold out 3x already. Rebel Cheese  

You could also set up presales, give people the option to add themselves to a waitlist, and provide early access to VIP shoppers. 

5) Provide order cancellation and refund policies upfront 

Give shoppers a heads-up on whether they can cancel an order once it’s placed, and what your refund policies are. 

For example, cookware brand Misen follows its order confirmation email with a “change or cancel within one hour” email that provides a handy link to do so. 

Misen order confirmation email with link to change or cancel within one hour of checkout.
Cookware brand Misen follows up its order confirmation email with the option to edit within one hour. Misen 

Your refund and order cancellation policies should live in an FAQ and in the footer of your website. 

6) Add how-to information 

Include how-to information on your website within your FAQs, on your blog, or as a standalone webpage. That might mean sharing how to use a product, how to cook with it, or how to prepare it. This can prevent questions like “How do you use this?”, “How do I cook this?”, or “What can I use this with?” 

For example, Purity Coffee created a full brewing guide with illustrations:

Purity Coffee brewing guide showing home drip and commercial batch brewer illustrations.
Purity Coffee has an extensive brewing guide on its website. Purity Coffee

Similarly, for its unique preseasoned carbon steel pan, Misen lists out care instructions.

Butter melting in a seasoned carbon steel pan on a gas stove.
Misen 

And for those who want to understand the level of prep and cooking time involved, The Dinner Ladies features cooking instructions on each product page. 

The Dinner Ladies product page featuring duck sausage rolls with cherry and plum dipping sauce.
The Dinner Ladies features a how-to-cook section on product pages. The Dinner Ladies 

7) Build resources to help with buying decisions 

Interactive quizzes, buying guides, and gift guides can help ensure shoppers choose the right items for them––without contacting you first. 

For example, Trade Coffee Co created a quiz to help first-timers find their perfect coffee match: 

Trade Coffee Co offers an interactive quiz to lead shoppers to their perfect coffee match. Trade Coffee Co

Set your team up for BFCM success with Gorgias 

The more information you can share with customers upfront, the better. That leaves your team time to tackle the complex stuff. 

If you’re looking for an AI assist this season, check out Gorgias’s suite of products like AI Agent and Shopping Assistant.

{{lead-magnet-2}}


What is Conversational AI? The Ecommerce Guide

Learn about the different types of conversational AI and its benefits for ecommerce.
By Gorgias Team

TL;DR:

  • Conversational AI combines natural language processing, machine learning, and generative AI to create human-like interactions
  • For ecommerce, it automates customer service, drives sales through personalized recommendations, and scales support 24/7
  • Key types include chatbots, voice assistants, and AI agents that handle both support and sales tasks
  • Implementation requires defining clear goals, choosing an ecommerce-ready platform, and connecting your tech stack

Conversational AI changes how ecommerce brands interact with customers by enabling natural, human-like conversations at scale, helping reduce customer churn.

Instead of forcing shoppers through rigid menus or making them wait for support, conversational AI understands questions, detects intent, and delivers instant, personalized responses. 

This technology powers everything from customer service chatbots to voice assistants, helping brands automate repetitive tasks while maintaining the personal touch customers expect. 

For ecommerce specifically, it means handling order inquiries, providing product recommendations, and recovering abandoned carts — all without adding headcount.

What is conversational AI?

Conversational AI is a type of artificial intelligence that allows computers to understand, process, and respond to human language through natural, two-way conversations. This means your customers can ask questions in their own words and get helpful answers that feel like they're talking to a real person.

Unlike basic chatbots that only recognize specific keywords, conversational AI actually understands what your customers mean. It can handle typos, slang, and complex questions that have multiple parts. The AI learns from every conversation, getting better at helping your customers over time.

Think of it as having a super-smart team member who never sleeps, never gets frustrated, and remembers every detail about your products and policies. This AI team member can chat with customers on your website, answer questions through social media, or even handle phone calls.

What are the key components of conversational AI?

Conversational AI works because several smart technologies team up to understand and respond to your customers. Each piece has a specific job in making conversations feel natural and helpful.

Natural Language Processing (NLP) is the foundation that breaks down human language into pieces a computer can understand. This means when a customer types "Where's my order?" the AI can identify the important words and grammar structure.

Natural Language Understanding (NLU) figures out what the customer actually wants. This is the smart part that realizes "Where's my order?" means the customer wants to track a shipment, even if they phrase it differently like "I need to check my package status."

Natural Language Generation (NLG) creates responses that sound human and helpful. Instead of robotic answers, it crafts replies that match your brand's voice and provide exactly what the customer needs to know.

The dialog manager keeps track of the entire conversation. This means if a customer asks a follow-up question, the AI remembers what you were just talking about and can give a relevant answer.

Your knowledge base stores all the information the AI needs to help customers. This includes your return policy, product details, shipping information, and any other facts your team would use to answer questions.

How does conversational AI work?

Conversational AI follows a simple three-step process that happens in seconds. Understanding this process helps you see why it's so much more powerful than old-school chatbots.

1) It processes input across voice and text with NLP

When a customer sends a message or asks a question, the AI first needs to understand what they're saying. For text messages from chat, email, or social media, the system breaks down the sentence into individual words and analyzes the grammar.

For voice interactions like phone calls, the AI uses speech recognition to turn spoken words into text first. Modern systems handle different accents, background noise, and natural speech patterns without missing a beat.

2) It detects intent and context with NLU

Once the AI has the customer's words, it needs to figure out what they actually want. The system looks for the customer's intent — their goal or what they're trying to accomplish.

For example, when someone asks "Can I return this sweater I bought last week?" the AI identifies the intent as wanting to make a return. It also pulls out important details like the product type and timeframe.

The AI also uses context from earlier in the conversation. If the customer mentioned their order number earlier, the AI remembers it and can use that information to help with the return request.

3) It generates responses with NLG

After understanding what the customer wants, the AI creates a helpful response. It might pull information from your knowledge base, personalize the answer with the customer's specific details, or generate a completely new response using generative AI.

The system also checks how confident it is in its answer. If the AI isn't sure about something or if the topic is too complex, it knows to hand the conversation over to one of your human agents.
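The three steps above can be sketched as a toy pipeline: score the message against known intents (a crude stand-in for NLU), then either answer or hand off when confidence is too low. The intents, keywords, and threshold here are illustrative assumptions, not a real Gorgias API.

```python
# Toy sketch of the three-step flow: detect intent from keyword
# overlap (a stand-in for NLU), then answer or escalate based on
# a confidence check. All names and values are illustrative.

INTENT_KEYWORDS = {
    "order_status": ["where", "order", "track", "package", "status"],
    "return_request": ["return", "refund", "send back"],
}

CONFIDENCE_THRESHOLD = 0.5

def detect_intent(message: str) -> tuple[str, float]:
    """Score each intent by keyword overlap; return the best match."""
    text = message.lower()
    scores = {
        intent: sum(kw in text for kw in kws) / len(kws)
        for intent, kws in INTENT_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best, scores[best]

def respond(message: str) -> str:
    intent, confidence = detect_intent(message)
    if confidence < CONFIDENCE_THRESHOLD:
        return "escalate_to_human"  # not confident enough: hand off
    return f"answer:{intent}"
```

In production, the scoring step would be an LLM or NLU model rather than keyword overlap, but the final confidence check mirrors the handoff behavior described above.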

What are the types of conversational AI?

Different types of conversational AI work better for different situations in your ecommerce business. Understanding these types helps you choose the right solution for your customers and team.

Chatbots handle scripted and AI-driven chat

Chatbots are the most common type you'll see on websites and messaging apps. Early chatbots followed strict scripts — if a customer's question didn't match the script exactly, the bot would get confused and give unhelpful answers.

Modern AI-powered chatbots understand natural language and can handle much more complex conversations. The best systems combine both approaches: using simple rules for straightforward questions and AI for everything else.

These chatbots work great for answering common questions about shipping, returns, and product details. They can also help customers find the right products or guide them through your checkout process.

Voice assistants manage speech-based requests

Voice assistants bring conversational AI to phone support and other voice channels. These aren't the old phone trees that made customers press numbers to navigate menus.

Instead, customers can speak naturally and get helpful answers right away. Voice assistants can look up order information, explain your return policy, or even process simple requests like address changes.

This works especially well for customers who prefer calling over typing, or when they need help while their hands are busy.

Read more: How Cornbread Hemp reached a 13.6% phone conversion rate with Gorgias Voice

AI agents and copilots assist teams and customers

AI agents are the most advanced type of conversational AI. Unlike chatbots that mainly provide information, AI agents can actually take action on behalf of customers.

These systems connect to your other business tools like Shopify, your shipping software, or your returns platform. This means they can do things like:

  • Process returns: Start a return and send the customer a shipping label
  • Update orders: Change a shipping address or add items to an existing order
  • Handle refunds: Issue refunds for eligible orders automatically
  • Manage subscriptions: Skip shipments or update subscription preferences

Copilots work alongside your human agents, suggesting responses and pulling up customer information to help resolve issues faster.

Read more: How AI Agent works & gathers data

What are the benefits of conversational AI for ecommerce?

Conversational AI delivers real business results for ecommerce brands. The benefits go beyond just making your support team more efficient — though that's certainly part of it.

24/7 availability means you never miss a sale or support opportunity. Customers can get help at 2 a.m. or during holidays when your team is offline. This is especially valuable for international customers in different time zones.

Instant responses prevent cart abandonment and customer frustration, improving first contact resolution. When someone has a question about sizing or shipping, they get an answer immediately instead of waiting hours or days for an email response.

Personalized interactions at scale drive higher average order values. The AI can recommend products based on what customers are browsing, their purchase history, and their preferences, just like your best salesperson would.

Cost efficiency comes from handling repetitive questions automatically. Your human agents can focus on complex issues, VIP customers, and revenue-generating activities instead of answering the same shipping questions over and over.

Multilingual support helps you serve global customers without hiring native speakers for every language. The AI can communicate in dozens of languages, opening up new markets for your business.

What are the most valuable conversational AI use cases for ecommerce?

Certain moments in the shopping experience create the biggest opportunities for conversational AI to drive results. Focus on these high-impact use cases first.

Pre-purchase questions are your biggest conversion opportunity. When someone is looking at a product but hasn't bought yet, quick answers about sizing, materials, or compatibility can close the sale. The AI can also suggest complementary products or highlight features the customer might have missed.

Order tracking makes up the largest volume of support tickets for most ecommerce brands. Customers want to know where their package is, when it will arrive, and what to do if there's a delay. AI handles these WISMO requests instantly by pulling real-time tracking information.

Returns and exchanges can be complex, but AI excels at the initial screening. It can check if an item is eligible for return, explain your policy, and start the return process. For straightforward returns, customers never need to wait for human help.

Cart recovery works best when it's immediate and personal. AI can detect when someone abandons their cart and reach out through chat or email with personalized messages, discount offers, or answers to common concerns that prevent purchases.

Post-purchase support keeps customers happy after they buy. The AI can send order confirmations, provide care instructions, suggest related products, and handle simple issues like address changes.

How do you implement conversational AI in an ecommerce tech stack?

Getting started with conversational AI doesn't require a complete overhaul of your systems. The key is starting with clear goals and building your capabilities over time.

Step 1: Define goals and KPIs for automation

The best automation opportunities are found in your tickets. Look for questions that come up repeatedly and have straightforward answers. Common examples include order status, return policies, and basic product information.

Set realistic goals for your first phase. You might aim to automate 30% of your tickets or reduce average response time by half. Track metrics like:

  • Automation rate: Percentage of tickets resolved without human intervention
  • Customer satisfaction: How happy customers are with AI interactions
  • Revenue impact: Sales influenced by AI recommendations or cart recovery
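As a quick illustration, the automation-rate metric above is simply automated resolutions divided by total tickets. The `resolved_by` field below is a hypothetical name; a real helpdesk export will use its own schema.

```python
# Illustrative KPI calculation: automation rate = tickets resolved
# without human intervention / total tickets. The "resolved_by"
# field is a hypothetical name, not a specific helpdesk schema.

def automation_rate(tickets: list[dict]) -> float:
    """Share of tickets resolved entirely by AI."""
    if not tickets:
        return 0.0
    automated = sum(1 for t in tickets if t.get("resolved_by") == "ai")
    return automated / len(tickets)

sample = [
    {"id": 1, "resolved_by": "ai"},
    {"id": 2, "resolved_by": "human"},
    {"id": 3, "resolved_by": "ai"},
    {"id": 4, "resolved_by": "ai"},
]
```

With this sample, `automation_rate(sample)` returns 0.75, comfortably past a 30% first-phase goal.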

Step 2: Choose an ecommerce-ready AI platform

Not all conversational AI platforms understand ecommerce needs. Look for a platform that integrates directly with Shopify and your other business tools. This connection is essential for pulling real-time order data, customer history, and product information.

Your platform should come with pre-built actions for common ecommerce tasks like order lookups, return processing, and subscription management. This saves months of custom development work.

Make sure you can control the AI's behavior through clear guidance and rules. You need to be able to set your brand voice, define when to escalate to humans, and update the AI's knowledge as your business changes.

Step 3: Connect Shopify and key tools, then iterate

Start your implementation by connecting your Shopify store to give the AI access to order and customer data. Don’t forget to integrate the rest of your tech stack like shipping software, returns platforms, and loyalty programs.

Launch with a few core use cases like order tracking and basic product questions. Monitor the AI's performance closely and gather feedback from both customers and your support team. Use this data to refine the AI's responses and gradually expand its capabilities. 

The best approach is iterative — start small, learn what works, and build from there.

What are the challenges and risks of conversational AI?

While conversational AI offers significant benefits, you need to be aware of potential challenges and plan for them from the start.

Accuracy concerns arise when AI systems provide incorrect information or "hallucinate" facts that aren't true. Prevent this by using platforms that ground responses in your verified knowledge base and product data rather than generating answers from scratch.

Brand voice consistency becomes critical when AI represents your brand to customers. Set clear guidelines for tone, style, and messaging. Test the AI's responses regularly to ensure they align with how your human team would handle similar situations.

Data privacy requires careful attention since conversational AI handles sensitive customer information. Choose platforms with strong security measures, data encryption, and compliance with regulations like GDPR. Look for features like automatic removal of personal information from conversation logs.

Over-automation can frustrate customers when complex issues require human empathy and problem-solving. Design clear escalation paths so customers can easily reach human agents when needed. Train your AI to recognize when a situation is beyond its capabilities.

Integration complexity can slow down implementation if your chosen platform doesn't work well with your existing tools. This is why choosing an ecommerce-focused platform with pre-built integrations is so important.

Turn conversations into revenue with conversational AI

The brands winning with conversational AI start with clear goals, choose the right platform, and iterate based on real performance data. They don't try to automate everything at once. They focus on high-impact use cases that deliver real results.

Ready to see how conversational AI can transform your ecommerce support and sales? Book a demo with Gorgias — built specifically for ecommerce brands.

{{lead-magnet-2}}


How to Make Your Help Center LLM-Friendly

Your Help Center doesn’t need a rebuild. It just needs a smarter structure so AI can find what customers ask about most.
By Holly Stanley

TL;DR:

  • You don’t need to rebuild your Help Center to make it work with AI—you just need to structure it smarter.
  • AI Agent reads your content in three layers: Help Center, Guidance, and Actions, following an “if / when / then” logic to find and share accurate answers.
  • Most AI escalations happen because Help Docs are vague or incomplete. Start by improving your top 10 ticket topics—like order status, returns, and refunds.
  • Make your articles scannable, define clear conditions, link next steps, and keep your tone consistent. These small tweaks help AI Agent resolve more tickets on its own—and free up your team to focus on what matters most.

As holiday season support volumes spike and teams lean on AI to keep up, one frustration keeps surfacing: our Help Center has the answers—so why can’t AI find them?

The truth is, AI can’t help customers if it can’t understand your Help Center. Most large language models (LLMs), including Gorgias AI Agent, don’t ignore your existing docs; they just struggle to find clear, structured answers inside them.

The good news is you don’t need to rebuild your Help Center or overhaul your content. You simply need to format it in a way that’s easy for both people and AI to read.

We’ll break down how AI Agent reads your Help Center and finds answers, and why small formatting changes help it respond faster and more accurately, so your team spends less time on escalations.

{{lead-magnet-1}}

How AI Agent uses your Help Center content

Before you start rewriting your Help Center, it helps to understand how AI Agent actually reads and uses it.

Think of it like a three-step process that mirrors how a trained support rep thinks through a ticket.

1. Read Help Center docs

Your Help Center is AI Agent’s brain. AI Agent uses your Help Center to pull facts, policies, and instructions it needs to respond to customers accurately. If your articles are clearly structured and easy to scan, AI Agent can find what it needs fast. If not, it hesitates or escalates.

2. Follow Guidance instructions

Think of Guidance as AI Agent’s decision layer. What should AI Agent do when someone asks for a refund? What about when they ask for a discount? Guidance helps AI Agent provide accurate answers or hand over to a human by following an “if/when/then” framework.

3. Respond and perform

Finally, AI Agent uses a combination of your help docs and Guidance to respond to customers, and if enabled, perform an Action on their behalf—whether that’s changing a shipping address or canceling an order altogether.

Here’s what that looks like in practice:

Email thread between AI Agent and customer about skipping a subscription.
AI Agent skipped a customer’s subscription after getting their confirmation.

This structure removes guesswork for both your AI and your customers. The clearer your docs are about when something applies and what happens next, the more accurate and human your automated responses will feel.

A Help Center written for both people and AI Agent:

  • Saves your team time
  • Reduces escalations
  • Helps every customer get the right answer the first time

What causes AI Agent to escalate tickets, and how to fix it

Our data shows that most AI escalations happen for a simple reason––your Help Center doesn’t clearly answer the question your customer is asking.

That’s not a failure of AI. It’s a content issue. When articles are vague, outdated, or missing key details, AI Agent can’t confidently respond, so it passes the ticket to a human.

Here are the top 10 topics that trigger escalations most often:

Ranked by share of escalations:

  1. Order status: 12.4%
  2. Return request: 7.9%
  3. Order cancellation: 6.1%
  4. Product quality issues: 5.9%
  5. Missing item: 4.6%
  6. Subscription cancellation: 4.4%
  7. Order refund: 4.1%
  8. Product details: 3.5%
  9. Return status: 3.3%
  10. Order delivered but not received: 3.1%

Each of these topics needs a dedicated, clearly structured Help Doc that uses keywords customers are likely to search and spells out specific conditions. 

Here’s how to strengthen each one:

  • Order status: Include expected delivery timelines, tracking link FAQs, and a clear section for “what to do if tracking isn’t updating.”
  • Return request: Spell out eligibility requirements, time limits, and how to print or request a return label.
  • Order cancellation: Define cut-off times for canceling and link to your “returns” doc for shipped orders.
  • Product quality issues: Explain what qualifies as a defect, how to submit photos, and whether replacements or refunds apply.
  • Missing item: Clarify how to report missing items and what verification steps your team takes before reshipping.
  • Subscription cancellation: Add “if/then” logic for different cases: if paused vs. canceled, if prepaid vs. monthly.
  • Order refund: Outline refund timelines, where customers can see status updates, and any exceptions (e.g., partial refunds).
  • Product details: Cover sizing, materials, compatibility, or FAQs that drive most product-related questions.
  • Return status: State how long returns take to process and where to check progress once a label is scanned.
  • Order delivered but not received: Provide step-by-step guidance for checking with neighbors, filing claims, or requesting replacements.

Start by improving these 10 articles first. Together, they account for nearly half of all AI Agent escalations. The clearer your Help Center is on these topics, the fewer tickets your team will ever see, and the faster your AI will resolve the rest.

How to format your Help Center docs for LLMs

Once you know how AI Agent reads your content, the next step is formatting your help docs so it can easily understand and use them. 

The goal isn’t to rewrite everything; it’s to make your articles more structured, scannable, and logic-friendly. 

Here’s how.

1. Use structured, scannable sections

Both humans and large language models read hierarchically. If your article runs together in one long block of text, key answers get buried.

Break articles into clear sections and subheadings (H2s, H3s) for each scenario or condition. Use short paragraphs, bullets, and numbered lists to keep things readable.

Example:

How to Track Your Order

  • Step 1: Find your tracking number in your confirmation email.
  • Step 2: Click the tracking link to see your delivery status.
  • Step 3: If tracking hasn’t updated in 3 days, contact support.

A structured layout helps both AI and shoppers find the right step faster, without confusion or escalation.

2. Write for “if/when/then” logic

AI Agent learns best when your Help Docs clearly define what happens under specific conditions. Think of it like writing directions for a flowchart.

Example:

  • “If your order hasn’t arrived within 10 days, contact support for a replacement.”
  • “If your order has shipped, you can find the tracking link in your order confirmation email.”

This logic helps AI know what to do and how to explain the answer clearly to the customer.
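To see why this phrasing helps, here is a minimal sketch that encodes the two example conditions above as explicit branches, the same “if/when/then” structure an AI (or a human agent) follows. The order fields (`placed_on`, `shipped`) are assumptions for illustration.

```python
# Minimal sketch of "if/when/then" Help Doc logic as explicit
# branches. The order fields and the 10-day window mirror the
# example sentences above; field names are illustrative.

from datetime import date, timedelta

def shipping_answer(order: dict, today: date) -> str:
    # "If your order has shipped, you can find the tracking link
    # in your order confirmation email."
    if order.get("shipped"):
        return "Check the tracking link in your order confirmation email."
    # "If your order hasn't arrived within 10 days, contact support
    # for a replacement."
    if today - order["placed_on"] > timedelta(days=10):
        return "Contact support for a replacement."
    return "Your order is being prepared; tracking arrives once it ships."
```

When your Help Doc states each condition this explicitly, the AI can map a customer’s situation to exactly one branch instead of guessing.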

3. Clarify similar terms and synonyms

Customers don’t always use the same words you do, and neither do LLMs. If your docs treat “cancel,” “stop,” and “pause” as interchangeable, AI Agent might return the wrong answer.

Define each term clearly in your Help Center and add small keyword variations (“cancel subscription,” “end plan,” “pause delivery”) so the AI can recognize related requests.

4. Link to next steps

AI Agent follows links just like a human agent. If your doc ends abruptly, it can’t guide the customer any further.

Always finish articles with an explicit next step, like linking to:

  • A form
  • Another article
  • A support action page

Example: “If your return meets our policy, request your return label here.”

That extra step keeps the conversation moving and prevents unnecessary escalations.

5. Keep tone consistent

AI tools prioritize structure and wording when learning from your Help Center—not emotional tone. 

Phrases like “Don’t worry!” or “We’ve got you!” add noise without clarity.

Instead, use simple, action-driven sentences that tell the customer exactly what to do:

  • “Click here to request a refund.”
  • “Fill out the warranty form to get a replacement.”

A consistent tone keeps your Help Center professional, helps AI deliver reliable responses, and creates a smoother experience for customers.

LLM-friendly Help Centers in action

You don’t need hundreds of articles or complex workflows to make your Help Center AI-ready. But you do need clarity, structure, and consistency. These Gorgias customers show how it’s done.

Little Words Project: Simple formatting that boosts instant answers

Little Words Project keeps things refreshingly straightforward. Their Help Center uses short paragraphs, descriptive headers, and tightly scoped articles that focus on a single intent, like returns, shipping, or product care. 

That makes it easy for AI Agent to scan the page, pull out the right facts, and return accurate answers on the first try.

Their tone stays friendly and on-brand, but the structure is what shines. Every article flows from question → answer → next step. It’s a minimalist approach, and it works, both for customers and for the AI reading alongside them.

Little Words Project Help Center homepage showing six main categories: Orders, Customization, Charms, Shipping, Warranty, and Returns & Exchanges.
Little Words Project's Help Center uses short paragraphs and tightly scoped articles to boost instant answers.

Dr. Bronner’s: Making tools work for the team

Customer education is at the heart of Dr. Bronner’s mission. Their customers often ask detailed questions about product ingredients, packaging, and certifications. With Gorgias, Emily and her team built a robust Help Center that proactively provides this information.

The Help Center doesn't just provide information. By integrating interactive Flows, Order Management, and Contact Form automation, Dr. Bronner’s handles routine inquiries—such as order status checks—quickly and efficiently. These interactive elements all work out of the box, with no IT support needed.

Dr. Bronner's Help Center webpage showing detailed articles, interactive flows, and order management automation for efficient customer support.
The robust, proactively educational Help Center, integrated with interactive flows and order management via Gorgias, streamlines detailed and routine customer inquiries.

Read more: How Dr. Bronner's saved $100k/year by switching from Salesforce, then automated 50% of interactions with Gorgias 

Ekster: Building efficiency through automation and clarity

Ekster website and a Gorgias chat widget. A customer asks "How do I attach my AirTag?" and the Support Bot instantly replies with a link to the relevant "User Manual" article.
Gorgias AI Agent instantly recommends a relevant "User Manual" article to a customer asking, "How do I attach my AirTag?", demonstrating how structured Help Center content enables quick, instant issue resolution.

When Ekster switched to Gorgias, the team wanted to make their Help Center work smarter. By writing clear, structured articles for common questions like order tracking, returns, and product details, they gave both customers and AI Agent the information needed to resolve issues instantly.

"Our previous Help Center solution was the worst. I hated it. Then I saw Gorgias’s Help Center features, and how the Article Recommendations could answer shoppers’ questions instantly, and I loved it. I thought: this is just what we need." —Shauna Cleary, Head of Ecommerce at Ekster

The results followed fast. With well-organized Help Center content and automation built around it, Ekster was able to scale support without expanding the team.

“With all the automations we’ve set up in Gorgias, and because our team in Buenos Aires has ramped up, we didn’t have to rehire any extra agents.” —Shauna Cleary, Head of Ecommerce at Ekster

Learn more: How Ekster used automation to cover the workload of 4 agents 

Rowan: Clean structure that keeps customers (and AI) on track

Rowan’s Help Center is a great example of how clear structure can do the heavy lifting. Their FAQs are grouped into simple categories like piercing, shipping, returns, and aftercare, so readers and AI Agent can jump straight to the right topic without digging. 

For LLMs, that kind of consistency reduces guesswork. For customers, it creates a smooth, reassuring self-service experience. 

Rowan's Help Center homepage, structured with six clear categories including Piercing Aftercare (19 articles), Returns & Exchanges, and Appointment Information.
Rowan’s Help Center uses a clean, categorized structure (Aftercare, Returns, Shipping) that lets customers and AI Agents jump straight to the right topic.

TUSHY: Balancing brand voice with automation

TUSHY proves you can have both personality and structure. Their Help Center articles use clear headings, direct language, and a brand-consistent tone, making it easy for AI Agent to give accurate, on-brand responses.

TUSHY bidet customer help center webpage showing categories: Toilet Fit, My Order, How to Use Your TUSHY, Attachments, Non-Electric and Electric Seats.
Explore articles covering Toilet Fit, My Order, How to Use Your TUSHY, and various Bidet Attachments, all structured for easy retrieval and use.
“Too often, a great interaction is diminished when a customer feels reduced to just another transaction. With AI, we let the tech handle the selling, unabashedly, if needed, so our future customers can ask anything, even the questions they might be too shy to bring up with a human. In the end, everybody wins!" —Ren Fuller-Wasserman, Senior Director of Customer Experience at TUSHY

Quick checklist to audit your Help Center for AI

Ready to put your Help Center to the test? Use this five-point checklist to make sure your content is easy for both customers and AI to navigate.

1. Are your articles scannable with clear headings?

Break up long text blocks and use descriptive headers (H2s, H3s) so readers and AI Agent can instantly find the right section.

2. Do you define conditions with “if/when/then” phrasing?

Spell out what happens in each scenario. This logic helps AI Agent decide the right next step without second-guessing.

3. Do you cover your top escalation topics?

Make sure your Help Center includes complete, structured articles for high-volume issues like order status, returns, and refunds.

4. Does each article end with a clear next step or link?

Close every piece with a call to action, like a form, related article, or support link, so neither AI nor customers hit a dead end.

5. Is your language simple, action-based, and consistent?

Use direct, predictable phrasing. Avoid filler like “Don’t worry!” and focus on steps customers can actually take.

By tweaking structure rather than rewriting content, you can turn your Help Center into a self-service powerhouse for both customers and your AI Agent.

Make your Help Center work smarter

Your Help Center already holds the answers your customers need. Now it’s time to make sure AI can find them. A few small tweaks to structure and phrasing can turn your existing content into a powerful, AI-ready knowledge base.

If you’re not sure where to start, review your Help Center with your Gorgias rep or CX team. They can help you identify quick wins and show you how AI Agent pulls information from your articles.

Remember: AI Agent gets smarter with every structured doc you publish.

Ready to optimize your Help Center for faster, more accurate support? Book a demo today.

{{lead-magnet-2}}


Further reading

Start SMS Support

Start providing SMS support today, with Gorgias

By Morgan Smith
4 min read.

SMS is a convenient way for customers to contact your brand and receive fast support. It’s no wonder it’s one of the top five channels that consumers expect to engage with brands, alongside email, voice, website, and in-person. 

Every Gorgias plan now includes two-way SMS at no additional cost, making it easy for your brand to start offering this conversational channel.

Why offer SMS support?

There are many reasons to offer customer service messaging, but here are the top four:

It’s fast and conversational

SMS is a conversational, real-time channel. The benefit of this is that customers tend to keep the conversation short and reply quickly to follow-up questions, meaning your agents can resolve the situation quickly, too. 

Customers can contact you while they’re “on the go” 

Most people keep their phone with them everywhere they go. With SMS, it’s easy for customers to start the conversation and follow up as they move through their day, instead of feeling tied to a chat conversation on their laptop. 

It’s natural for younger customers

Texting a brand feels like texting a friend. Younger customers will find this support channel natural, and it can even help you build that friendly feeling into your brand perception. 

It makes sending photos back and forth easy

Does your refund or return policy require photo evidence to kick off the process? If your customers ever need to send pictures of damaged items or wrong products, SMS is the perfect channel because they’re probably taking those photos on their phone anyway. 

Still not sure if SMS is a support channel your brand should prioritize? Try it for 2 weeks. Because SMS is included in every Gorgias plan, it’s easy to turn off if you decide it isn’t right. 

Recommended reading: Our list of 60+ fascinating customer service statistics.

How to add SMS to your helpdesk 

You’ll need two things to get started with Gorgias SMS. (Don’t worry, they’re both quick!) 

The first is a Gorgias helpdesk account. If you’re new here, it only takes a few minutes to create one, and you can always book a call with our sales team if you have questions. 

The second is a Gorgias-owned phone number, meaning you either created it in Gorgias or ported it from your previous phone provider. You can do both of these actions in Settings > Phone Numbers. 

Note: SMS is currently only available for US, UK, and Canadian numbers. 

Once your phone number is ready in Gorgias, you can add the SMS integration to it. You can do this from Settings > Integrations > SMS. 

Once the integration is active, you’re ready to start replying to SMS conversations from your customers. 

To tell your customers they can now text your brand, we recommend adding “Text us,” plus your phone number, in some or all of these places: 

  • The footer of your website
  • The “Contact Us” page of your website
  • Your Gorgias Help Center
  • Transactional emails (order confirmation, return initiated, etc.)

4 automation Rules to help you get started 

Below are four top automation rules to take full advantage of SMS customer service. We also have a full guide on customer service messaging that includes templates and macros to upgrade your SMS support.

Auto-tag with “SMS”

SMS is an official channel in Gorgias, meaning you can see SMS-specific stats or create SMS-specific Views out of the box. There may be times, however, when you also want to Tag tickets with “SMS.” In that case, you can do so with a Rule like this: 

Auto-assign to a real-time team

SMS is a fast, conversational channel, so you’ll want to assign these tickets to agents that can keep up with the pace. If you have a dedicated chat team, they’ll be naturals at answering questions via SMS, as well. Here’s a Rule that will automatically assign SMS tickets to a specific team. 

Auto-reply: Message received

When customers text your brand, they’ll expect a fast response. To buy your agents some time, we recommend sending an auto-response letting the customer know their message has been received and an agent will be with them shortly. This also gives them confidence that the text message did in fact go through, so they don’t follow up right away. 

Auto-reply: Order status

Whenever you add a new communication channel for your customers, you should consider how you’ll respond to WISMO (“Where is my order?”) questions on it. With SMS, you’ll want to keep the length of your reply in mind so you’re not sending an insanely long text message back to customers. We recommend creating a Rule that can A) make sure the reply follows the best format for SMS and B) save your agents from having to answer these WISMO questions manually.

Next: Connect your SMS marketing apps for a seamless experience

Gorgias SMS empowers your brand to keep the conversation going on SMS, even when your customers are on the go.

We also integrate with SMS marketing apps, making it easier for agents to answer promotion replies from one workspace. They can work more efficiently while turning SMS questions into opportunities for better customer value. 

In the Gorgias App Store, you’ll find some of the top ecommerce integration partners like Klaviyo, Attentive, Postscript, and more. 

If your brand is using any of these apps to drive sales via SMS, we highly recommend integrating with Gorgias so your team can work more efficiently toward your revenue goals. When SMS marketing and SMS customer service work in tandem, they are far more powerful.

Want to see an example of a brand that successfully launched SMS customer support and effectively drove customers to use the new channel? Check out our playbook of Berkey Filters, an ecommerce merchant that did just that.

Ready to get started with this conversational support channel? Add SMS to your Gorgias helpdesk today or book a call with our team to learn more.

Continuous Deployment

Leveraging Automation on Our Path to Continuous Deployment and GitOps

By Vincent Gilles
9 min read.

As we all locked down in March 2020 and changed our shopping habits, many brick-and-mortar retailers started their first online storefronts. 

Gorgias has benefitted from the resulting ecommerce growth over the past two years, and we have grown the team to accommodate these trends. From 30 employees at the start of 2020, we are now more than 200 on our journey to delivering better customer service.

Our engineering team contributed to much of this hiring, which created some challenges and growing pains. What worked at the beginning with our team of three did not hold up when the team grew to 20 people. And the systems that scaled the team to 20 needed updates to support a team of 50. To continue to grow, we needed to build something more sustainable.

Continuous deployment — and the changes required to support it — presented a major opportunity for reaching toward the scale we aspired to. In this article I’ll explore how we automated and streamlined our process to make our developers’ lives easier and empower faster iteration.

Scaling our deployment process alongside organizational growth

Throughout the last two years of accelerated growth, we’ve identified a few things that we could do to better support our team expansion. 

Before optimizing the feature release process, here’s how things went for our earlier, smaller team when deploying new additions:

  1. Open a pull request (PR) on GitHub, which would run our tests in our continuous integration (CI) system
  2. Merge those changes into the main branch, once the changes are approved
  3. Automatically deploy the new commit in the staging/testing environment, after tests run and pass on the main branch
  4. Deploy these changes in our production environment, assuming all goes well up until this point
  5. Post on the dedicated Slack channel to inform the team of the new feature, specifying the project deployed and attaching a screenshot of all commits since the last deployment.
  6. Watch dashboards for any changes — as a failsafe to back up the alerts that were already triggering — to check if the change needed to be rolled back.

This wasn’t perfect, but it was an effective solution for a small team. However, the accelerated growth of the engineering team led to a sharp increase in the number of projects, as well as the number of collaborators on each one. We began to notice several points of friction:

  • The process was slow and painful. Continuous integration and continuous deployment (CI/CD) systems are meant to speed the process up, but rigorous testing is still essential. We needed to find the sweet spot between speed and rigor, and we believed both aspects left room for improvement.
  • Developers didn’t always take full ownership of their changes. When a change wasn’t considered critical (which happened fairly often), a developer would often let the next developer with a critical change deploy multiple commits at the same time. When problems occurred, this made it much more difficult to diagnose the bad commit.
  • It was a challenge to track version changes. To track the version of a service that was deployed in production, you had to either check our Kubernetes clusters directly or go through the screenshots in our dedicated Slack channel.
  • Each project had its own set of scripts to help with deployment. We wanted to streamline our deployment process and add some consistency across all projects.

It was clear that things needed to change.

Adjusting practices and tools to lay the foundation for implementing GitOps

On the Site Reliability Engineering (SRE) team, we are fans of the GitOps approach, where Git is the single source of truth. So when the previously mentioned points of friction became more critical, we felt that all the tooling involved in GitOps practices could help us find practical solutions.

Additionally, these solutions would often rely on tooling we already had in place (like Kubernetes or Helm).

What is GitOps?

GitOps is an operational framework. It takes application-development best practices and applies them to infrastructure automation. 

The main takeaway is that in a GitOps setting, everything from code to infrastructure configuration is versioned in Git. It is then possible to create automation by leveraging the workflows associated with Git. 

What are the benefits of implementation?

One such class of automation could be “operations by pull requests,” in which pull requests and their associated events trigger various operations. 

Here are some examples:

  • Opening a pull request could build an application and deploy it to a preview environment
  • You could add a commit to said pull request to rebuild the application and update the container image’s version in the preview environment
  • By merging the pull request, you could trigger a workflow that would result in the new changes being deployed in a live production environment

Using ArgoCD as a building block

ArgoCD is a continuous deployment tool that relies on GitOps practices. It helps synchronize live environments and services to version-controlled declarative service definitions and configurations, which ArgoCD calls Applications. 

In simpler terms, an Application resource tells ArgoCD to look at a Git repository and to make sure the deployed service’s configuration matches the one stored in Git.
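For illustration, a minimal Application manifest pointing ArgoCD at a chart in Git might look like this (the names, repository URL, and paths are hypothetical, not Gorgias’s actual configuration):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: awesome-service
  namespace: argocd
spec:
  project: default
  source:
    # Where the versioned configuration lives (hypothetical repo and path)
    repoURL: https://github.com/example/awesome-service-config
    targetRevision: main
    path: charts/awesome-service
  destination:
    # Which cluster and namespace to keep in sync with Git
    server: https://kubernetes.default.svc
    namespace: awesome-service
```

ArgoCD continuously compares the live state of that destination with what is declared at the Git path and reconciles any drift.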

The goal wasn’t to reinvent the wheel when implementing continuous deployment. We instead wanted to approach it in a progressive manner. This would help build developer buy-in, lay the groundwork for a smoother transition, and reduce the risk of breaking deploys. ArgoCD was an excellent step toward those goals, given how flexible it is with customizable Config Management Plugins (CMP).

ArgoCD can track a branch to keep everything up to date with the last commit, but can also make sure a particular revision is used. We decided to use the latter approach as an intermediate step, because we weren’t quite ready to deploy off the HEAD of our repositories. 

The only difference from a pipeline perspective is that it now updates the tracked revision in ArgoCD instead of running our complex deployment scripts. ArgoCD has a Command Line Interface (CLI) that lets us do just that. Our deployment jobs only need to run the following command:
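With the ArgoCD CLI, pinning an Application to a specific Git revision looks roughly like the following (the application name and commit SHA are placeholders, and a real pipeline would also pass authentication options):

```
# Illustrative: tell ArgoCD to track this specific Git revision
argocd app set awesome-service --revision 3f2a9c1d
```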

The developers’ workflow is left untouched at this point. Now comes the fun part.

Building automation into our process to move faster

Our biggest requirement for continuous deployment was to have some sort of safeguard in case things went wrong. No matter how much we trust our tests, it is always possible that a bug makes its way to our production environments.

Before implementing Argo Rollouts, we still kept an eye on the system to make sure everything was fine during deployment and took quick action when issues were discovered. But up to that point, this process was carried out manually. 

It was time to automate that process, toward the goal of raising our team’s confidence levels when deploying new changes. By providing a safety net, of sorts, we could be sure that things would go according to plan without manually checking it all.

Argo Rollouts can revert changes automatically when issues arise

Argo Rollouts is a progressive delivery controller. It relies on a Kubernetes controller and a set of custom resource definitions (CRDs) to provide advanced deployment capabilities on top of those natively offered by Kubernetes. These include features like:

  • Blue/Green, which consists of deploying all the new instances of our application alongside the old version without sending any traffic to them at first. We can then run tests on the new version and flip the switch once we’ve made sure everything is fine. Once no more traffic is sent to the old version, we can tear it down.
  • Canary deployments, which allow us to start by deploying only a small number of replicas running the new version of our software. This way, we’re able to shift a small portion of traffic to the new version. We can do this in multiple steps, shifting only 1% of the traffic toward the new version at first, then 10%, 50%, or more, depending on what we’re trying to achieve.
  • Analyzing new deployments’ performance. Argo Rollouts allows us to automate some checks as we are rolling out a new version of our software. To do that, we describe such checks in an AnalysisTemplate resource, which Argo Rollouts will use to query our metric provider and make sure everything is fine.
  • Experiments, which are another resource Argo Rollouts introduces to allow for short-lived experiments such as A/B testing.
  • Progressive delivery in Kubernetes clusters by managing the entire rollout process and allowing us to describe the desired steps of a rollout. It allows us to set a weight for a canary deployment (the ratio between pods running the new and the old versions), perform an analysis, or even pause a deployment for a given amount of time or until manual validation.
Argo Rollouts dashboard view of our awesome-service rollout. On the left we can see the current version is stable and on the right we can see the different steps during the rollout process, top to bottom.

We were especially interested in the canary and canary analysis features. By shifting only a small portion of traffic to the new version of an application, we can limit the blast radius in case anything is wrong. Performing an analysis allows us to automatically, and periodically, check that our service’s new version is behaving as expected before promoting this canary. 
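As a rough sketch of how those canary steps translate into configuration (the service name, weights, durations, and template name below are illustrative, not our production values):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: awesome-service
spec:
  strategy:
    canary:
      steps:
        - setWeight: 1            # send 1% of traffic to the new version
        - pause: {duration: 5m}
        - analysis:               # automated check against our metrics
            templates:
              - templateName: error-rate-check
        - setWeight: 10
        - pause: {duration: 10m}
        - setWeight: 50
        - pause: {duration: 10m}  # full promotion follows if all is well
```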

Argo Rollouts is compatible with multiple metric providers including Datadog, which is the tool we use. This allows us to run a Datadog query (or multiple) every few minutes and compare the results with a threshold value we specify. 

We can then configure Argo Rollouts to automatically take action, should the threshold(s) be exceeded too often during the analysis. In those cases, Argo Rollouts scales down the canary and scales the previous stable version of our software back to its initial number of replicas.
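An AnalysisTemplate backing such a check could be sketched as follows (the metric name, Datadog query, and threshold are hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: error-rate-check
spec:
  metrics:
    - name: error-rate
      interval: 5m          # re-run the query every few minutes
      failureLimit: 3       # abort the rollout after too many failures
      successCondition: default(result, 0) < 0.05
      provider:
        datadog:
          interval: 5m
          query: >-
            sum:awesome_service.errors{env:production}.as_count() /
            sum:awesome_service.requests{env:production}.as_count()
```

When the success condition fails too often, Argo Rollouts rolls the canary back on its own.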

Argo rollouts in action, stopping a bad deploy that would have certainly caused a large problem

Each service has its own metrics to monitor, but for starters we added an error rate check for all of our services.

Creating a deployment conductor to simplify configuration and deployment management

Remember when I mentioned replacing complex, project-specific deployment scripts with a single, simple command? That’s not entirely accurate, and requires some additional nuance for a full understanding.

Not only did we need to deploy software to different kinds of environments (staging and production), but also to multiple Kubernetes clusters per environment. For example, the applications composing the Gorgias core platform are deployed across multiple cloud regions all around the world.

As magical as ArgoCD and Argo Rollouts might seem, we still need some “glue” to make everything stick together. Thanks to ArgoCD’s application-based mechanisms, we were able to get rid of custom scripts and use one common tool across all projects. We named this in-house tool the deployment conductor.

We even went a step further and implemented this tool in a way that accepts simple YAML configuration files. Such files allow us to declare various environments and clusters in which we want each individual project to be deployed.

When deploying a service to an environment, our tool will then go through all clusters listed for that environment.

For each of these, it will look for dedicated values.yaml files in the service’s chart directory. This allows developers to change a service’s configuration based on the environment and cluster in which it’s deployed. Typically, they would want to edit the number of replicas for each service depending on the geographical region.

This makes it much easier for developers than having to manage configuration and maintain deployment scripts.
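Since the deployment conductor is an in-house tool, its exact schema isn’t public; a configuration file in this spirit might look something like the following (all names and keys are illustrative):

```yaml
# Illustrative deployment-conductor config for one service
service: awesome-service
environments:
  staging:
    clusters:
      - staging-europe-west1
  production:
    clusters:
      - production-europe-west1
      - production-us-east1
      - production-asia-northeast1
```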

Enabling continuous deployment

This leads us to the end of our journey’s first leg: our first encounter with continuous deployment.

After we migrated all our Kubernetes Deployments to Argo Rollouts, we let our developers get acclimated for the next few weeks. 

Our new setup still wasn’t fully optimized, but we felt like it was a big improvement compared to the previous one. And while we could think of many improvements to make things even more reliable before enabling continuous deployment, we decided to get feedback from the team during this period, to iterate more effectively.

Some projects introduced additional technicalities to overcome, but we easily identified a small first batch of projects where we could enable CD. Before enabling it, we asked the development team whether we were missing anything they needed to feel comfortable with automatic deployment of their code to production environments. 

With everyone feeling good about where we were at, we removed the manual step in our CI system (GitLab) for jobs deploying to production environments.

Next steps on the path to continuous deployment

We’re still monitoring this closely, but so far we haven’t had any issues. We still plan on enabling continuous deployment on all our projects in the near future, but it will be a work in progress for now.

Here are some ideas for future improvements that anticipate potential roadblocks:

  • Some projects still require additional safeguards before continuous deployment. Automating database migrations is one of our biggest challenges. Helm pre-upgrade hooks would allow us to check if a migration is necessary before updating an application and run it when appropriate. But when automating these database migrations, the tricky part is avoiding heavy locks on critical tables.
  • It still isn’t that easy to track what version of a service is currently deployed. When things go according to plan, the last commit in the main branch should either be deployed or currently deploying. To solve this, we could go a step further and version the state of each application for each cluster, including the version identifier for the version that should be deployed. We’re also monitoring the Argo image updater repository closely. When a stable version is released, it could help us detect new available versions for services, deploy them, and update the configuration in Git automatically.
  • When there are multiple clusters per environment with the same services deployed, we end up with too many ArgoCD applications. One thing we could do is use the “app of apps” pattern and manage a single application to create all the other required applications for a given service.
  • On bigger projects, the volume of activity may require queuing deployments. If two people merge changes to the main branch at around the same time, there could be issues: the last thing we want is for the latest commit to be deployed and then replaced by the commit preceding it.

We’re excited to explore these challenges. And, overall, our developers have welcomed these changes with open arms. It helps that our systems have been successful at stopping bad deployments from creating big incidents so far. 

While we haven’t reached the end of our journey yet, we are confident that we are on the right path, moving at the right pace for our team.

Prevent Idle In Transaction

Avoiding idle-in-transaction connection states with SQLAlchemy

By Gorgias Engineering
10 min read.

As you work with SQLAlchemy, over time, you might have a performance nightmare brewing in the background that you aren’t even aware of.

In this lesser-known issue, which strikes primarily in larger projects, normal usage leads to an ever-growing number of idle-in-transaction database connections. These open connections can kill the overall performance of the application.

While you can fix this issue down the line, when it begins to take a toll on your performance, it takes much less work to mitigate the problem from the start.

At Gorgias, we learned this lesson the hard way. After testing different approaches, we solved the problem by extending the high-level SQLAlchemy classes (namely sessions and transactions) with functionality that allows working with "live" DB (database) objects for limited periods of time, expunging them after they are no longer needed.

This analysis covers everything you need to know to close those unnecessary open DB connections and keep your application humming along.

The problem: your database connection states are monopolizing unnecessary resources

Leading Python web frameworks such as Django come with an integrated ORM (object-relational mapping) that handles all database access, separating most of the low-level database concerns from the actual user code. The developer can write their code focusing on the actual logic around models, rather than thinking of the DB engine, transaction management or isolation level.

While this scenario seems enticing, big frameworks like Django may not always be suitable for our projects. What happens if we want to build our own starting from a microframework (instead of a full-stack framework) and augment it only with the components that we need?

In Python, the extra packages we would use to build ourselves a full-fledged framework are fairly standard: They will most likely include Jinja2 for template rendering, Marshmallow for dealing with schemas and SQLAlchemy as ORM.

Request-response paradigm vs. background tasks

Not all projects are web applications (following a request-response pattern), and even among web applications, most also run background tasks that have nothing to do with requests or responses.

This is important to understand because in request-response paradigms, we usually open a DB transaction upon receiving a request and we close it when responding to it. This allows us to associate the number of concurrent DB transactions with the number of parallel HTTP requests handled. A transaction stays open for as long as a request is being processed, and that must happen relatively quickly — users don't appreciate long loading times.

Transactions opened and closed by background tasks are a totally different story: There's no clear and simple rule on how DB transactions are managed at a code level, there's no easy way to tell how long tasks (should) last, and there usually isn't any upper limit to the execution time.

This could lead to potentially long transaction times, during which the process effectively holds a DB connection open without actually using it for the majority of the time period. This state is known as an idle-in-transaction connection state and should be avoided as much as possible, because it blocks DB resources without actively using them.

The limitations of SQLAlchemy with PEP-249

To fully understand how database access transpires in a SQLAlchemy-based app, one needs to understand the layers responsible for the execution.

Layers of execution in an SQLAlchemy app

At the highest level, we code our DB interaction using high-level SQLAlchemy queries on our defined models. The query is then transformed into one or more SQL statements by SQLAlchemy's ORM which is passed on to a database engine (driver) through a common Python DB API defined by PEP-249. (PEP-249 is a Python Enhancement Proposal dedicated to standardizing Python DB server access.) The database engine communicates with the actual database server.

At first glance, everything looks good in this stack. However, there's one tiny problem: The DB API (defined by PEP-249) does not provide an explicit way of managing transactions. In fact, it mandates the use of a default transaction regardless of the operations you're executing, so even the simplest SELECT will open a transaction if none is open on the current connection.

SQLAlchemy builds on top of PEP-249, doing its best to stay out of driver implementation details. That way, any Python DB driver claiming PEP-249 compatibility could work well with it.

While this is generally a good idea, SQLAlchemy has no choice but to inherit the limitations and design choices made at the PEP-249 level. More precisely (and importantly), it will automatically open a transaction for you upon the very first query, regardless of whether it's needed. And that's the root of the issue we set out to solve: In production, you'll probably end up with a lot of unwanted transactions, locking up DB resources for longer than desired.

Also, SQLAlchemy uses sessions (in-memory caches of models) that rely on transactions. And the whole SQLAlchemy world is built around sessions. While you could technically ditch them to avoid the idle-in-transaction problem with a "lower-level" interface to the DB, all of the examples and documentation you'll find online use the "higher-level" interface (i.e. sessions). It's likely that you will feel like you are swimming against the tide trying to get that workaround up and running.

Postgres and the different types of autocommits

Some DB servers, most notably Postgres, default to an autocommit mode. This mode implies atomicity at the SQL statement level, which is what developers are likely to expect: they explicitly open a transaction block when needed and operate outside of one by default.

If you're reading this, you have probably already Googled for "sqlalchemy autocommit" and may have found their official documentation on the (now deprecated) autocommit mode. Unfortunately this functionality is a "soft" autocommit and is implemented purely in SQLAlchemy, on top of the PEP-249 driver; it doesn't have anything to do with DB's native autocommit mode.

This version works by simply committing the opened transaction as soon as SQLAlchemy detects an SQL statement that modifies data. Unfortunately, that doesn't fix our problem; the pointless, underlying DB transaction opened by non-modifying queries still remains open.

When using Postgres, we could in theory play with the new AUTOCOMMIT isolation level option introduced in psycopg2 to make use of the DB-level autocommit mode. However this is far from ideal as it would require hooking into SQLAlchemy's transaction management and adjusting the isolation level each time as needed. Additionally, "autocommit" isn't really an isolation level and it’s not desirable to change the connection's isolation level all the time, from various parts of the code. You can find more details on this matter, along with a possible implementation of this idea in Carl Meyer's article “PostgreSQL Transactions and SQLAlchemy.”

At Gorgias, we always prefer explicit solutions to implicit assumptions. By including all details, even common ones that most developers would assume by default, we can be more clear and leave less guesswork later on. This is why we didn't want to hack together a solution behind the scenes, just to get rid of our idle-in-transactions problem. We decided to dig deeper and come up with a proper, explicit, and (almost) hack-free method to fix it.

Visualizing an idle-in-transaction case

The following chart shows the profile of an idle-in-transaction case over a period of two weeks, before and after fixing the problem.

Visualizing idle-in-transaction, before and after

As you can see, we’re talking about tens of seconds during which connections are being held in an unusable state. In the context of a user waiting for a page to load, that is an excruciatingly long period of time.

The solution: expunged objects and frozen models

Expunging objects to prevent long-lasting idle connections

SQLAlchemy works with sessions that are, simply put, in-memory caches of model instances. The code behind these sessions is quite complex, but usage boils down to either an explicit session reference or implicit usage through a session registry.
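The original snippets aren't reproduced here, but the two styles look roughly like this (a minimal sketch with a hypothetical `User` model, and SQLite standing in for Postgres):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

Base = declarative_base()

class User(Base):  # hypothetical model, for illustration only
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")  # SQLite standing in for Postgres
Base.metadata.create_all(engine)
SessionLocal = sessionmaker(bind=engine)

# Explicit session reference: you create and hold the session yourself.
session = SessionLocal()
user = session.query(User).filter_by(name="alice").first()

# Implicit usage: a thread-local registry hands out "the" current session.
registry = scoped_session(SessionLocal)
user = registry.query(User).filter_by(name="alice").first()
```

Either way, the very first query opens a transaction behind the scenes, per the PEP-249 behavior described earlier.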

Both of these approaches will ensure a transaction is opened and will not close it until a later session.commit() or session.rollback(). There's actually nothing wrong with calling session.commit() when you need to explicitly close a transaction that you know is open and you're done using the DB in that particular scope.

To address the idle-in-transaction problem generated by such a line, we must keep the code between the query and the commit relatively short and fast (i.e. avoid blocking calls or CPU-intensive operations).

It sounds simple enough, but what happens if we access an attribute of a DB model after session.commit()? It will open another transaction and leave it hanging, even though it might not need to hit the DB at all.

While we can't foresee what a developer will do with the DB object afterward, we can prevent usage that would hit the DB (and open a new transaction) by expunging it from the session. An expunged object will raise an exception if any unloaded (or expired) attributes are accessed. And that’s what we actually want here: to make it crash if misused, rather than leaving idle-in-transaction connections behind to block DB resources.
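Here's a small sketch of that fail-fast behavior (hypothetical `User` model, SQLite standing in for Postgres): after a commit expires the object's attributes, expunging it makes further attribute access raise instead of silently reopening a transaction.

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker
from sqlalchemy.orm.exc import DetachedInstanceError

Base = declarative_base()

class User(Base):  # hypothetical model, for illustration only
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

user = User(name="alice")
session.add(user)
session.commit()       # expire_on_commit=True marks all attributes expired
session.expunge(user)  # detach the object from the session

try:
    user.name          # expired attribute on a detached object
except DetachedInstanceError:
    print("raised instead of silently opening a new transaction")
```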

Building an expunging context manager to handle transactions and connections

When working with multiple objects and complex queries, it’s easy to overlook the necessary expunging of those objects. It only takes one un-expunged object to trigger the idle-in-transaction problem, so you need to be consistent.

Objects can't be used for any kind of DB interaction after being expunged. So how do we make it clear and obvious that certain objects are to be used within a limited scope? The answer is a Python context manager to handle SQLAlchemy transactions and connections. Not only does it allow us to visually limit object usage to a block, but it will also ensure everything is prepared for us and cleaned up afterwards.
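The article's original snippet isn't shown here; a minimal standalone approximation (the `begin` name and `_Tx` handle are illustrative, with SQLite standing in for Postgres) could look like this:

```python
from contextlib import contextmanager

from sqlalchemy import create_engine, inspect, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):  # hypothetical model, for illustration only
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
SessionLocal = sessionmaker(bind=engine)

class _Tx:
    """Tiny handle exposing the block's session as tx.session."""
    def __init__(self, session):
        self.session = session

@contextmanager
def begin(expunge=False):
    """Open a session for one block; commit and clean everything up on exit."""
    session = SessionLocal()
    try:
        yield _Tx(session)
        session.commit()            # closes the underlying DB transaction
    except Exception:
        session.rollback()
        raise
    finally:
        if expunge:
            session.expunge_all()   # detached objects can't reopen it
        session.close()

with begin(expunge=True) as tx:
    user = User(name="alice")
    tx.session.add(user)

# Outside the block, the transaction is closed and `user` is detached.
print(inspect(user).detached)
```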

The construct above normally opens a transaction block associated with a new SQLAlchemy session, but we've added a new expunge keyword to the begin method, instructing SQLAlchemy to automatically expunge objects associated with the block's session (the tx.session). To get this kind of behavior from a session, we need to override the begin method (and friends) in a subclass of SQLAlchemy's Session.

We want to keep the default behavior and use a new ExpungingTransaction instead of SQLAlchemy's SessionTransaction, but only when explicitly instructed to by the expunge=True argument.

You can use the class_ argument of sessionmaker to instruct it to build an ExpungingSession instead of a regular Session.
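As a simplified illustration of both ideas (the Session subclass and the class_ hook of sessionmaker): the real implementation overrides begin and related methods, but to keep this sketch short and runnable, the expunging is hung off commit instead.

```python
from sqlalchemy import create_engine, inspect, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker, Session

Base = declarative_base()

class User(Base):  # hypothetical model, for illustration only
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class ExpungingSession(Session):
    """Session that can expunge everything after each commit (simplified)."""

    def __init__(self, *args, expunge_on_commit=False, **kwargs):
        super().__init__(*args, **kwargs)
        self.expunge_on_commit = expunge_on_commit

    def commit(self):
        super().commit()
        if self.expunge_on_commit:
            self.expunge_all()

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# sessionmaker builds ExpungingSession instances instead of plain Sessions.
SessionLocal = sessionmaker(bind=engine, class_=ExpungingSession,
                            expunge_on_commit=True)

session = SessionLocal()
user = User(name="alice")
session.add(user)
session.commit()               # committed, then expunged automatically
assert inspect(user).detached  # nothing left to reopen a transaction
```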

The last piece of the puzzle is the ExpungingTransaction code, which is responsible for two important things: committing the session so the underlying transaction gets closed and expunging objects so that we don't accidentally reopen the transaction.
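We don't reproduce the real subclass of SQLAlchemy's SessionTransaction here, but its two duties can be approximated by a plain context-manager class (any session-like object with commit, rollback, and expunge_all methods will do):

```python
class ExpungingTransaction:
    """Simplified stand-in for the transaction class described above: on a
    clean exit it commits the session (closing the DB transaction), then
    expunges every object so nothing can accidentally reopen it."""

    def __init__(self, session):
        self.session = session

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self.session.commit()
        else:
            self.session.rollback()
        self.session.expunge_all()
        return False  # never swallow exceptions
```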

By following these steps, you get a useful context manager that forces you to group your DB interaction into a block and notifies you if you mistakenly use (unloaded) objects outside of it.

Using frozen models to deal with expunged objects

What if we really need to access DB models outside of an expunging context?

Simply passing models to functions as arguments helps in achieving a great goal: the decoupling of model retrieval from actual usage. However, such functions are no longer in control of what happens to those models afterwards.

We don't want to forbid all usage of models outside of this context, but we need to somehow inform the user that the model object comes “as is,” with whatever loaded attributes it has. It's disconnected from the DB and shouldn't be modified.

In SQLAlchemy, when we modify a live model object, we expect the change to be pushed to the DB as soon as commit or flush is called on the owning session. With expunged objects this is not the case, because they don't belong to a session. So how does the user of such an object know what to expect? The user needs to ensure that they:

  • Doesn't access an unloaded attribute of a live DB object, as it may open an unwanted transaction
  • Doesn't modify attributes of an expunged object, as it won't be saved

To safely and explicitly pass along this kind of model object, we introduced frozen objects. Frozen objects are basically proxies to expunged models that won't allow any modification.

To work with these frozen objects, we added a freeze method to our ExpungingSession:

So now our code would look something like this:
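Putting the pieces together, a sketch of the frozen proxy, the freeze method, and their usage might look like this (model and method names are illustrative; the article's real implementation differs in detail):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker, Session

Base = declarative_base()

class User(Base):  # hypothetical model, for illustration only
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Frozen:
    """Read-only proxy around an expunged model instance."""

    def __init__(self, obj):
        object.__setattr__(self, "_obj", obj)

    def __getattr__(self, name):
        return getattr(object.__getattribute__(self, "_obj"), name)

    def __setattr__(self, name, value):
        raise AttributeError("frozen objects cannot be modified")

class ExpungingSession(Session):
    def freeze(self, obj):
        """Load attributes while still attached, then detach and wrap."""
        self.refresh(obj)   # make sure attributes are loaded
        self.expunge(obj)
        return Frozen(obj)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
SessionLocal = sessionmaker(bind=engine, class_=ExpungingSession)

session = SessionLocal()
session.add(User(name="alice"))
session.commit()

frozen = session.freeze(session.query(User).filter_by(name="alice").one())
session.close()

print(frozen.name)       # reads work: attributes were loaded before freezing
try:
    frozen.name = "bob"  # writes are blocked
except AttributeError:
    print("frozen objects are read-only")
```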

Now, what if we want to modify the object outside of this context, later on (e.g. after a long-lasting HTTP request)? As our frozen object is completely disconnected from any session (and from the DB), we need to fetch a warm instance associated with it from the DB and make our changes to that instance. This is done by adding a helper fetch_warm_instance method to our session...

...and then our code that modifies the object would say something like this.
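Here is one possible shape for fetch_warm_instance and the modification flow, under the same illustrative setup (a sketch, not the article's actual code; it re-fetches by primary key via Session.get, and freeze is simplified to return the bare detached object rather than a proxy):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker, Session

Base = declarative_base()

class User(Base):  # hypothetical model, for illustration only
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class ExpungingSession(Session):
    def freeze(self, obj):
        self.refresh(obj)   # load attributes, then detach
        self.expunge(obj)
        return obj          # simplified: the real version returns a proxy

    def fetch_warm_instance(self, frozen_obj):
        """Fetch a live, session-attached instance matching a frozen one."""
        return self.get(type(frozen_obj), frozen_obj.id)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
SessionLocal = sessionmaker(bind=engine, class_=ExpungingSession)

session = SessionLocal()
session.add(User(name="alice"))
session.commit()
frozen = session.freeze(session.query(User).one())
session.close()

# ...later, e.g. after a long-lasting HTTP request, modify via a warm copy:
session = SessionLocal()
warm = session.fetch_warm_instance(frozen)
warm.name = "bob"
session.commit()   # the change is written to the DB right away
```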

When the second context manager exits, it will call commit on tx.session, and changes to my_model will be committed to the DB right away.

Frozen Relationships

We now have a way of safely dealing with models without generating idle-in-transaction problems, but the code quickly becomes a mess if we have to deal with relationships: We need to freeze them separately and pass them along as if they aren’t related. This could be overcome by telling the freeze method to freeze all related objects, recursively walking the relationships.
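One possible shape for that recursive walk, using SQLAlchemy's runtime inspection API (a sketch, not the article's actual code; it skips unloaded relationships so it never triggers new lazy loads on detached objects):

```python
from sqlalchemy import create_engine, inspect, Column, ForeignKey, Integer
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Parent(Base):  # hypothetical models, for illustration only
    __tablename__ = "parents"
    id = Column(Integer, primary_key=True)
    children = relationship("Child", back_populates="parent")

class Child(Base):
    __tablename__ = "children"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parents.id"))
    parent = relationship("Parent", back_populates="children")

def freeze_recursive(session, obj, _seen=None):
    """Expunge obj and every already-loaded related object, recursively."""
    if _seen is None:
        _seen = set()
    if id(obj) in _seen:
        return obj
    _seen.add(id(obj))
    if obj in session:                  # guard against double expunges
        session.expunge(obj)
    state = inspect(obj)
    for rel in state.mapper.relationships:
        if rel.key in state.unloaded:   # never trigger new lazy loads here
            continue
        related = getattr(obj, rel.key)
        items = related if isinstance(related, (list, set)) else [related]
        for item in items:
            if item is not None:
                freeze_recursive(session, item, _seen)
    return obj

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Parent(children=[Child(), Child()]))
session.commit()

parent = session.query(Parent).one()
parent.children   # preload the relationship while still in the session
freeze_recursive(session, parent)
```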

We'll have to make some adjustments to our frozen proxy class as well.

Now, we can fetch, freeze, and use frozen objects with any preloaded relationships.

Additional recommendations and caveats

  • Don't call session.commit() inside an expunging context manager's block. In fact, avoid using session at all and use tx.session instead. The context manager will take care of flushing and committing the session when exited.
  • Avoid nested sessions inside the context block.
  • Try to use one single query inside a context manager. If you need multiple queries, it often makes sense to use separate context blocks for each one.
  • If you don't need to pass along an entire model object, you don't need to freeze it. Imagine that you only need an object's id or name attribute; you can simply store it in a variable while inside the expunging context block.

Avoid idle-in-transaction connection states to preserve DB resources

While the code to access the DB with SQLAlchemy may look simple and straightforward, one should always pay close attention to transaction management and the subtleties that arise from the various layers of the persistence stack.

We learned this the hard way, when our services eventually started to exhaust the DB resources many years into development.

If you recently decided to use a software stack similar to ours, you should consider writing your DB access code in such a way that it avoids idle-in-transaction issues, even from the first days of your project. The problem may not be obvious at the beginning, but it becomes painfully apparent as you scale.

If your project is mature and has been in development for years, you should consider planning changes to your code to avoid or minimize idle-in-transaction issues while the situation is still under control. You can start writing new idle-in-transaction-proof code while planning to gradually update existing code, according to the capacity of your development team.

International SaaS Salary Calculator

How We Built an International SaaS Salary Calculator for Our Distributed Team

By Adeline Bodemer
5 min read.

Like any major topic in your company, your compensation policy should reflect your organizational values.

At Gorgias, we created a compensation calculator that reflected ours, setting salaries across the organization based on 3 key principles:

  1. Compensation should be based on data
  2. Compensation should reflect everyone’s ownership, meaning everyone should have equity
  3. Compensation should be transparent

Since the beginning, we applied the first two: Each of our employees was granted data-driven stock options that beat the market average.

However, we were challenged internally: Our team members asked how much they would make if they switched teams or if they got promoted.

This led to the implementation of our third key principle, as we shared the compensation calculator with everyone at Gorgias and beyond: See the calculator here.

This was not a small challenge. We’re sharing our process in hopes that we can help other companies arrive at equitable, transparent compensation practices.

We built our compensation calculator using four key indicators

First, let’s get back to how we built the tool. We had to decide which criteria we wanted to take into account. Based on research articles and benchmarks on what other companies did before, we decided that our compensation model would be based on 4 factors: position, level, location, and strategic orientation.

If we had to sum it all up briefly, our formula looks like this:

Average of data (for the position at the defined percentile and level) x Location index

Salaries are based on four criteria: position, level, location, and strategic orientation.
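Expressed in code, the formula amounts to something like this (all salary numbers are hypothetical, not Gorgias benchmarks; only the 0.56 Paris index comes from this article):

```python
def salary(benchmark_salaries, location_index):
    """Average the data points for the position at the chosen percentile
    and level, then apply the location index."""
    return sum(benchmark_salaries) / len(benchmark_salaries) * location_index

# Hypothetical inputs: three database benchmarks for one position/level,
# and the 0.56 location index the article quotes for Paris.
print(salary([150_000, 160_000, 155_000], 0.56))
```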

Position

This is the job title someone has in the company. It looks simple, but it can be challenging to define! Even if the titles don’t really vary from one company to another, people might have different duties, deal with much bigger clients or have more technical responsibilities. Sometimes your job title or position doesn’t match the existing databases.

For some of these roles, when we thought that our team members were doing more than the market average, we cross-referenced several databases to arrive at a fairer figure.

Level

To assess a level we defined specific criteria in our growth plan for each job position. It is, of course, linked to seniority, but that is not the primary factor. When we hire someone, we evaluate their skills using specific challenges and case studies during our interview processes.

Depending on the database, you'll find levels like beginner, intermediate, and expert, which we represent as L1, L2, L3, etc. We decided to go with six levels (L1 to L6) for individual contributors and six levels in management, from team lead to C-level executive.

Location

Our location index is based on the cost of living in a specific city (we rely on Numbeo, for instance) and on the average salary for a position we hire for (we use Glassdoor). Some cities are better providers of specific talent. By combining these, we get a more accurate location index.

When we are missing data for a specific city, we use the nearest one where we have data available.

Our reference is San Francisco, where the location index equals 1, meaning it’s basically the most expensive city in terms of hiring. For others, we have an index that can vary from 0.29 (Belgrade, Serbia) to 0.56 (Paris, France) to 0.65 (Toronto, Canada) etc. We now have 50+ locations in our salary calculator — a necessary consideration for our quickly growing, global team of full-time employees and contractors.

Strategic orientation

We rely on our strategic orientation to select which percentile we want to use in our databases. When we started Gorgias we were using the 50th percentile. As we grew (and raised funds), we wanted to be 100% sure that we were hiring the best people to build the best possible company.  

High quality talent can be expensive (but not as expensive as making the wrong hires)! Obviously, we can’t pay everyone at the top of the market and align with big players like Google, but we can do our best to get close.

Since having the best product is our priority we pay our engineering and product team at the 90th percentile, meaning their pay is in the top 10% of the industry. We pay other teams at the 60th percentile.

Some other companies take into account additional criteria, such as company seniority. We believe seniority should be reflected in equity rather than in salary. If you apply a company-seniority index to salaries, some of your team members' pay will eventually drift out of line with the market. Those employees may stay in your company only because they won't be able to find the same salary elsewhere.

By crossing several databases, we arrived at a more accurate dataset

Data is at the heart of our company DNA.

Where should you find your data? Data is everywhere! What matters most is the quality.

We look for the most relevant data on the market. If the database is not robust enough, we look elsewhere. So far we have managed to rely on several of them: Opencomp, Optionimpact, Figures.hr, and Pave are some major datasets we use for compensation. We’re curious and always looking for more. We’ll soon dig into Carta, Eon, and Levels. The more data we get, the more confident we are about the offers we make to our team.

Once we have the data, we apply our location index. It applies to both salaries and equity.

To build our equity package, we start from the compensation and then apply a "team" multiplier and a "level" multiplier. Those multipliers rely on data, of course: the same databases mentioned above, plus the Rewarding Talent documentation for Europe.

Internal communication is key

As we mentioned above, once our tool was robust enough, we shared it internally.

To be honest, checking and re-checking took longer than expected. But we all agreed that we'd rather release it to good reactions than rush it and create fear. We postponed the release by one month to check and double-check the results.

For the most effective release, we decided to do two things:

  1. We shared it with one team at a time. This was done to anticipate the flow of questions, though we didn’t receive that many.
  2. We shared it with a lot of humility. Even if we checked the data many times, we could have missed something, or there could have been something that lacked consistency. We asked everyone to stress-test it and to provide feedback.

Overall, the reactions have been great. People loved the transparency and we got solid feedback.

We released the new calculator in September 2021, and overall we’re really happy with the response. We also had positive feedback from the update this month.

Let’s see how it goes with time.

Next step: sharing it with the rest of the industry

Let’s be humble here: It’s only the beginning. It’s a Google Sheet. Of course, we’ll need to iterate on it. 

In the meantime, you can check out the calculator here.

So far we’ve made plans to review the whole grid every year. However, now that it’s public within the teams, we can collect feedback and potentially make some changes. Everyone can add comments as they notice potential issues.

The next step for us is to share it online with everyone, on our website, so that candidates can have a vision of what we offer. We hope we’ll attract more talent thanks to this level of transparency and the value of our compensation packages.

Engaging Employees in a Hybrid World

Gorgias’s Playbook for Engaging Employees in a Hybrid World

By Adeline Bodemer
6 min read.

I come from the world of physical retail where building a bond was more straightforward. We often celebrated wins with breakfast and champagne (yes, I’m French!) or by simply clapping our hands and making noise of joy.

We would also have lunch together every day, engaging in many informal discussions.

Of course, it bonded us! I knew my colleagues' dogs' names and their plumber problems, and I felt really close to many of them.

Employee engagement is one of the primary drivers of productivity, work quality, and talent retention. When I joined Gorgias, where we have a globally distributed team, I wondered how you create the sense of belonging that drives that engagement.

The ingredients for employee engagement

Like many companies now, our workforce is distributed. But at Gorgias, it's a truly global affair: Our team spans 17 countries, four continents, and many different time zones, which can be challenging.

And yet, I believe Gorgias culture is truly amazing and even better than the one I used to know.

I realize that we achieved that by relying on the critical ingredients of a strong relationship:

  • Strong moments - Simply sharing coffee won’t take you very far in getting to know your colleagues. But creating some great moments together will bring you one step further.
  • Repetition - If you don’t nourish the relationship consistently, it may unravel with time. You won’t feel as connected as before.

By repeating these strong moments, you can make the connection between people stronger as well. The stronger the connection, the stronger the engagement.

Speaking of a strong engagement, Gorgias’s eNPS (employee Net Promoter Score) is 50. How is this possible? Well, what’s always quoted as one of our main strengths is the company culture, and how it connects our employees.

Let’s take it further by exploring five actionable steps we have taken to make that happen.

Organize virtual summits (quarterly) 

While some would push back against events like these falling under the purview of the People team, they are important for building strong culture, team cohesion, and employee happiness — all areas that are definitely part of our directive.

Here’s what you need to know to bring these summits to your organization.

What is the virtual summit?

As the name states, it’s a virtual event where the whole company connects.

It’s not mandatory, but it is highly recommended to attend because it’s fun and you learn many things.

It's a mix of company updates, fun moments, and inspiring sessions. Each session is short, to give everyone a chance to breathe.

Typically we have three kinds of sessions:

  • Company updates range from intro sessions with the CEO and team lead presentations, to founder Q&As. During these sessions we have a short retro on the quarter to share strategic vision, which also provides an opportunity for the whole team to challenge the company leadership.
  • Fun moments include activities like scavenger hunts, quizzes, online escape games, and online musical activities.
  • Inspiring sessions have covered topics including the benefits of a morning routine and recruiting tips. These sessions help us learn and grow, a top priority for our teammates.

Due to timezones, some sessions don’t include every country.

What are the key elements to make it work?

  • Teamwork: Pretty obvious, right? But a great team is key to making the virtual summit a success. Identify who can be the owner of this whole event. In our case, it was someone from the People team, our Office and Happiness manager.

  • Delegation: Get help from other teams to build the summit content. Having your team building that all alone would be overwhelming. Delegate! The customer success team can help you build the quiz: “How well do you know our customers?” for instance. The recruiting team can share how to be a good recruiter. And external vendors can help with specific games — we used virtual event contractors for the ones that would’ve been too cumbersome to build.

  • Tools: Look for a solid platform to rely on. We used to rely on Google Meet, but since we have a growing number of employees, we use Bevy to cater to our virtual event needs.

  • Content: A nice video at the beginning of the session is always a good ice breaker, and it sets the mood. The same goes for engaging slides. Even though we rarely use slide decks, dynamic slides are more effective than boring written docs for engaging 200 people for half-hour blocks. We share slides to present the company updates and the learning sessions.

  • Anticipation: I think we can all agree that last-minute organization doesn’t work. The more you anticipate, the less stressful it will be. And the bigger your company is, the more things you need to anticipate.

How much does it cost?

Our last virtual summit cost us roughly $13,000, which means $65 per head. Here’s the breakdown:

  • Content: $4,000
  • Speakers for learning sessions: $2,000
  • Games/animations: $5,000
  • Food: $2,000

What are the challenges?

The first thing you might already have in mind is: It takes time! And you’re right.

The more we grow, the more challenging it becomes to organize these events.

I believe we’ll eventually need to have a dedicated event manager for all of our physical and virtual events. I want to have them within my team, and I 100% believe it’s worth it.

Another challenge can be technical difficulties with your event software choice, so make sure that you find a reliable platform that suits your needs.

Allow in-person gatherings at the nearest hub (quarterly)

Our team is a mix of hybrid and full-remote workers.

Since we don’t want the full-remote people to become disconnected, we highly encourage them to join the nearest hub once a quarter.

And when they do, we organize some happy hours, games or movie nights. Those face-to-face activities help create bonds between employees. It’s simple and doesn’t require a lot of organization, but it creates an incredible moment every time the remote teams join. We call them Gorgias Weeks.

Organize a company offsite (annually)

Of course, with the pandemic, that's not an easy one.

We were fortunate to be able to organize our company offsite and gather a massive part of the crew together in October 2021. 

The pandemic created doubt and additional points of stress, but looking back I’m so glad we were able to create an opportunity for everyone to meet in person.

We asked everyone to bring a health pass — full vaccination or PCR test — and we picked a location that allowed for a lot of outdoor activities.

We made sure the agenda for the two days was not too busy. As with our virtual summit, it was a balance of company alignment, learning, and fun. We made sure people had enough free time to relax, talk to each other, play games, or play sports.

This company offsite is surely an essential and strong moment for us and it helps create strong bonds and great memories.

Encourage team offsites (annually)

We encourage every team to organize their own offsite for team-building purposes. Since people don’t meet a lot physically, having these once a year is great!

We let each team lead own it. They pick the location and the agenda. Then, we provide guidelines with the budget.

Needless to say, it helps build stronger bonds and great memories.

Have informal fun moments (weekly) 

In my experience, it was quite tough to create those moments internally with the team. That’s why we decided to start our team meeting with a fun activity of 10-15 minutes, where we are able to share more than just work. 

Every week, there is a different meeting owner who has to come up with new fun activities and games. Starting the meeting with this kind of ice-breaking activity brings powerful energy, and people are more engaged and effective in the sessions. I would recommend it to everyone, especially to those who think, “We already have so many things to review in those weekly meetings, we don’t have time for that.” Try it once, you’ll see how the energy and productivity are different afterward. 

On top of that, I also believe tools that encourage colleagues to meet randomly are great. On our side, we use Donut. It sends a weekly reminder encouraging employees to make it to their meeting with a colleague.

Team cohesion and employee happiness are worthwhile investments

Overall, we’ve organized six virtual summits, four company retreats, three Gorgias weeks, and hundreds of virtual coffee and fun meetings. 

At the beginning there were only 30 people in the company — now there are 200 of them. As I mentioned, it’s becoming more and more challenging to organize these meetups, but it’s also the most exciting part: making sure the next summit is better than the previous one! 

Of course, I’m aware that employee fulfillment and connection  are not the only ingredients for retention. But they are key ingredients and shouldn’t be forgotten, especially as we all become more remote. 

It’s a worthy investment to organize these events and allocate resources to them, because it makes everyone at Gorgias feel included and connected. And I have no doubt, now, that it’s part of our responsibilities in People Ops.

Customer Service Twitter

10 Best Practices for Providing Exceptional Customer Service on Twitter

By Ryan Baum
8 min read.

When a customer's problem goes unanswered on Twitter, you lose that customer and possibly the audience of people who watched it happen. 

It's hard to come back from that, which is why customer care is so important on social media platforms. In fact, Shopify found 57% of North American consumers are less likely to buy if they can't reach customer support in the channel of their choice.

Your customers want to talk to you — and you should want the same, before they head to a competitor. But first, you need to build a customer support presence on Twitter that lives up to your broader customer experience.

We've helped over 8,000 brands upgrade their customer support and seen the best and worst of social media interactions. Here are our top 10 battle-tested best practices for providing exceptional Twitter support.

1. Promptly and accurately respond to tweets

Prompt response time is one of the most important pillars of great customer service. According to a survey conducted by Twitter, 75% of customers on the platform expect fast responses to their direct messages. 

Of course, accurate and helpful information ultimately matters even more than real-time responses, so don't sacrifice accuracy in a rush to reduce your response times. 

Promptly and accurately responding to customer service issues that are sent to your company's Twitter account is often easier said than done. To do both, you need an efficient system and a well-trained customer support team. 

This is where a helpdesk is critical: it brings your Twitter conversations into a central feed alongside all your other tickets. 

tweets in a helpdesk feed

If you’re trying to manage Twitter natively in a browser, or through copy-paste discussions with your social media manager, you’re not going to see the first-response times you need to succeed. 

As data from Twitter's survey shows, speed is a necessity in order to meet customer expectations and provide a positive experience.

2. Move conversations out of the public space

There may be instances where customers contact your Twitter support account via a public mention rather than a direct message. In fact, one in every four customers on Twitter will tweet publicly at brands in the hopes of getting a faster response, according to data from Twitter. In these instances, it is important to move the conversation into DMs as soon as possible.

There are a couple of reasons you would want to avoid resolving customer service issues on a public forum. For one, keeping customer service conversations private allows you to maintain better control over your brand voice and image since customer service conversations can often get a little messy and may not be something you want to broadcast to your entire audience. 

Moving conversations out of the public space also enables you to collect personal details from the customer, such as their phone number, other contact information, and details about their order, without broadcasting them to your entire audience. (Truly sensitive data, like credit card numbers, should still only be collected through secure payment channels, never over DMs.)

In Gorgias, you can set up an auto-reply rule that responds to public support questions and directs them to send a DM for further help. This can ensure that people feel heard immediately, even if it takes a while for your team to get to their DM.

3. Don’t get into emotional arguments

Regardless of whether you are discussing an issue with a customer via your Twitter account or any other medium, it is never a good idea for your reps to get into arguments with the customer. 

Social media platforms such as Twitter tend to have a much more informal feel than other contact methods, and they can also bring out the worst in people who hide behind the anonymity they provide. You may find that customers who contact you via Twitter are sometimes a little more argumentative than those who reach out through more formal channels. 

Nevertheless, it is essential for your Twitter support reps to maintain professionalism and avoid engaging in emotional arguments with customers. It may even be worth establishing guidelines for your team for dealing with this type of tweet. You can include rules on emoji use, helpful quick-response scripts, and whatever other priorities you have.

Recommended reading: How to respond to angry customers

4. Have a direct way for your support agent to reply to tweets

It is certainly possible to provide customer support using Twitter alone. However, this isn't always the most efficient way to go about it. 

Keep in mind that, like other social networks, Twitter wasn't necessarily designed to be a customer support channel. There aren't a lot of Twitter features beyond basic notifications that will be able to help your team organize support tickets. 

Thankfully, there are third-party solutions that allow your support agents to respond to tweets and Twitter direct messages in a much more organized and efficient way. At Gorgias, for example, we offer a Twitter integration that automatically creates support tickets anytime someone mentions your brand, replies to your brand's tweets, or direct messages your brand. (By the way, we also offer integrations for Facebook Messenger and WhatsApp.)

Agents can then respond to these messages and mentions directly from the Gorgias platform, where they will show up in the same dashboard as the tickets from your other support channels. 

This integration makes Twitter customer support far more efficient for your team and is one of the most effective ways to take your Twitter customer support services to the next level.

reply to tweets within your helpdesk

5. Always respond to feedback (even if it’s negative)

It is always important to respond to all questions and feedback that customers provide via Twitter, even if that feedback is negative. This is an important part of relationship marketing.

Many brands shy away from responding to negative feedback on public forums for fear of drawing more attention to the issue. However, this rarely has the desired effect. Failing to respond to negative feedback can make it look to anyone who sees the tweet like your brand is dodging the issue. 

While you may wish to move the conversation out of the public space as soon as possible, you should always provide a public response to public feedback — negative or not. 

For examples of brands effectively responding to negative tweets, check out this article.

6. Be as personable as possible

According to data from Forbes, 86% of customers say that they would rather speak with a real human being than a chatbot. Even if you don't rely on chatbots for providing customer support, though, your customers may not be able to tell the difference unless you train your reps to be as personable as possible. 

When your reps tailor their responses and connect on a personal level, it creates a much more positive support experience and a halo effect for your brand. Customers will remember that the next time they arrive at the checkout button, and they might even be open to upsell opportunities in that very moment.

7. Create a tracking strategy for brand mentions

Small businesses may not struggle to keep up with brand mentions, given that there are fewer to track. For larger companies, though, keeping up with brand mentions can be a difficult task. This is especially true when some users tag brands with hashtags instead of handles.

This makes it important to create an effective strategy for tracking brand mentions in an efficient and organized manner. One of the best ways to go about this is to utilize integrations that will create a support ticket anytime a customer mentions your brand in a tweet. You can even create custom views in Gorgias to centralize all of these mentions.

By tracking these brand mentions, you can also retweet positive posts for brand awareness.

Brand mentions view in Gorgias

8. Create guidelines to explain which issues you support via Twitter

Not every customer service issue can be handled via Twitter. If there are certain types of issues that fall into that category for your brand, it's a good idea to keep your customers in the loop by providing concise FAQ guidelines that explain which issues you do and don't support via Twitter. 

These guidelines can come in the form of a pinned Tweet at the top of your Twitter support account or an off-Twitter link that you provide to customers when they contact you on Twitter with an issue that requires a different medium for resolution. You could even have a visual you add when you respond to questions that don’t fit your guidelines. 

Simply responding to customers and requesting that they direct message you for further assistance is another option for addressing issues that you don't want to handle on Twitter. If you set up the auto-reply we mentioned in the second tip, above, it could even include a link to these guidelines.

Check out what this brand did when contacted on Twitter with a problem that needed to be taken off-platform in order to be resolved.

9. Consider having multiple Twitter handles for sales, marketing, and customer support

If it makes sense for your brand, it may be a good idea to create separate Twitter handles for sales, marketing, and customer support. Handles that serve different purposes allow you to better organize your direct messages and mentions by breaking them into distinct categories. 

Having a designated customer support Twitter account can also encourage customers to contact you there with their support issues, since it reassures them that this is the purpose the account serves. 

But even then, some customers will still tweet at your main account with issues. When this happens, you can use intent and sentiment analysis in Gorgias to automatically route those issues to the correct agent or team.

detect the intent behind tweets with Gorgias

10. Understand the full context of every Twitter interaction

When a customer takes the time to reach out to you on Twitter, whether it’s via direct message or a mention, it’s likely not the first time that customer has interacted with your brand. 

If you respond on Twitter, you can see the direct message history on that platform, but that’s where the context ends. With Gorgias’s Twitter integration, you can see the full customer journey, including all social media engagement, support tickets across all of your channels and even past orders.

This context is crucial to understanding the conversation you’re walking into, so you can deal with the situation appropriately. If the person is a long-time customer who engages frequently, you’re going to treat that conversation differently than that of a customer who bashes you on social networks and returns products frequently.

Break down your Twitter customer service silo

Any customer support you provide through Twitter will make things more convenient and accessible for your audience. 

But to make the experience faster and more pleasant on both sides of the conversation, you should consider handling all of your social media customer support in one platform, alongside all your other tickets. 

Gorgias ties social handles to customer profiles from your Shopify, BigCommerce or Magento store, uniting relevant conversations from across all of your support channels. All of that info is automatically pulled into your response scripts, and you can even automate the process for no-touch ticket resolution.

Check out our social media features to learn more.

Reduce Chat Widget Lighthouse Score

How We Reduced Our Chat Widget’s Lighthouse Impact From -15 Points to -1 With Simple Bundle Fixes

By Roman Fayzullin
10 min read.

Strong website performance is no longer a “nice to have” metric — it’s a critical part of your user experience.

Slow loading times and laggy pages tank conversion rates. They serve up a negative first impression of your brand and can push even your most loyal customers to greener pastures.

When we found out our chat widget had started negatively impacting our customers’ Google Lighthouse scores — an important performance metric — we immediately started searching for a solution.

Live chat is a notoriously resource-intensive category, but we were able to cut our entry point bundle in half using the process I lay out in this article. As a result, we reduced the Lighthouse score impact to just one point, compared with a control.


Form and function of live chat widgets

Chat widgets are small apps that allow visitors to get quicker results without leaving the webpage they’re on. When open, the chat window usually sits in the bottom corner of the screen.

Here is an example:

Live chat widget example

Live chat is especially helpful on ecommerce websites, because retail shoppers expect quicker responses. Repetitive questions involving order status, return policies, and similar situations are easily resolved in chat, and it can also provide a starting point for more complex inquiries.

Because merchants make up the bulk of our customers at Gorgias, our live chat feature is a major part of our product offering.

Our live chat feature is a regular React Redux application rendered in an iframe. It may appear simple and limited, but its features extend beyond simple chat to include campaigns, a self-service portal, and a widget API.

We implemented code-splitting from the beginning to reduce bundle size, leaving us with the following chunks:

  • An entry point chunk, which contained React, Redux and other essential modules
  • A chat window chunk
  • A chunk with a phone number input component

Unfortunately, that initial action wasn’t enough to prevent performance issues.

Initial negative impact of our chat widget

We started hearing from merchants that the chat widget was impacting their Google Lighthouse scores, essentially decreasing page performance. As I previously mentioned, chat widgets generally have a bad reputation in this regard. But we were seeing unacceptable drops of 15 points or more.

To put those 15 points in context, here are the Google Lighthouse ranges:

  • 0 to 49 - Poor
  • 50 to 89 - Needs improvement
  • 90 to 100 - Good

So if you had a website with 95 performance points, it was considered to be “good” by Lighthouse, but the chat could take it down to “needs improvement”.

Of course, we immediately set out to find and fix the issue.

Analysis and bundle reorganization

There were several potential causes for these performance issues. To diagnose them and test potential solutions, we prioritized the possible problem areas and worked our way down the list. We also kept an open mind and looked in other areas, which allowed us to find some fixes we didn’t initially expect.

The initial entrypoint file was 195kB gzipped and the entire bundle was 343kB gzipped. By the end, we had reduced those numbers to 109kB and 308kB respectively.

Here’s what we found.

Checking for unnecessary rendered DOM elements

First, we opened a test shop with chat installed and tried to find something unusual.

It didn’t take long: The chat window chunk was loaded and the corresponding component was rendered, even if you didn't interact with the chat. It wasn't visible, because the main iframe element had a display: none property set.

The user sees a small button, but there are a lot of DOM elements from a browser's point of view

Then, we moved to the Profiler tab, where we found that the browser was using a lot of CPU, as reported:

Chat widget CPU in Profiler tab before deferring the rendering

Here's what happens if you defer rendering of this component, as originally intended:

Chat widget CPU in Profiler tab after deferring the rendering

However, this deferral introduced another issue: after clicking the button to open the chat, the window appeared with a noticeable delay. It's easy to explain: previously, the JS chunk with this component was downloaded and executed immediately, while these changes caused the chunk to load only after interaction.

This problem is easily fixable by using resource hints. These special HTML tags tell your browser to proactively make connections or download content before the browser normally would. We needed a resource hint called prefetch, which asks the browser to download and cache a resource with a low priority.

It looks like this:
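A typical prefetch hint looks like the following (the chunk filename below is a placeholder, not the actual asset name):

```html
<!-- Ask the browser to fetch the chat chunk at low priority, during idle time -->
<link rel="prefetch" href="/assets/chat-window.chunk.js" as="script">
```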

There's a similar resource hint called preload, which does the same thing with higher priority. We chose prefetch because chat assets are not as important as the resources of the main site.

Since we're using webpack to bundle the app, it's very easy to add this tag dynamically. We just added a special comment inside dynamic import, so it looked like this:
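A dynamic import carrying a webpack "magic comment" looks like the sketch below. This is an illustrative reconstruction, not the actual widget code, and the module path is a placeholder:

```javascript
// webpack sees the webpackPrefetch magic comment and injects a
// <link rel="prefetch"> tag for the generated chunk into the page.
// The import itself still runs lazily, only when this function is called.
const loadChatWindow = () =>
  import(/* webpackPrefetch: true */ './ChatWindow');
```

Nothing is executed until loadChatWindow() is invoked, but the browser is free to download the chunk ahead of time at low priority.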

Though this solution didn’t affect bundle size, it significantly increased the performance score by only loading the chat when necessary.

Analyzing bundle size

Once the rendering was working as intended, we started to search for opportunities to reduce the bundle size.

Bundle size doesn’t always affect performance. For example, here you can see almost the same amount of JS, although execution times are very different:

Similar Javascript with different load times

In most cases, however, there is a correlation between bundle size and performance. It takes the browser longer to parse and execute the additional lines of code in larger bundles.

This is especially true if the app is bundled via webpack, which wraps each module with a function to execute. This isn’t a problem with just a couple of modules, but it can add up — especially once you start getting up into the hundreds.

We used a few tools to find opportunities to reduce bundle size.

The webpack-bundle-analyzer plugin created an interactive treemap, visualizing the content of all bundles.

Chat widget interactive treemap with webpack-bundle-analyzer
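For reference, enabling the treemap is a one-plugin change to the webpack config. A hypothetical excerpt (the options and the rest of the config will vary by setup):

```javascript
// webpack.config.js excerpt: generate a static treemap report at build time
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  // ...existing entry/output/module configuration...
  plugins: [
    new BundleAnalyzerPlugin({ analyzerMode: 'static' }),
  ],
};
```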

The Coverage tab inside Google Chrome DevTools helped us see which lines were loaded, but not used. The minified code made it more difficult to use, but it was still insightful.

Coverage tab in Google Chrome DevTools

Checking that tree-shaking is working properly

Next, we discovered the client bundle included the yup validation library, which was unexpected. We use this library on the backend, but it’s not a part of the widget.

It turns out the intended tree-shaking didn't work in this situation — we had a shared file which was used by the JS client and backend. It contained a type declaration and validation object, and for some reason webpack didn't eliminate the second one.

After moving the type declaration into its own file, the bundle size dropped dramatically: 48kB gzipped.

Lazy loading big libraries

We also discovered the Segment analytics SDK took 37.8 kB gzipped.

Since we don't use this SDK on initial load, we created a separate chunk for this library and started to load it only when it's needed.

Separating certain libraries out of the main chunk

By looking into the chart from webpack-bundle-analyzer, we realized that it was possible to move React Router's code from the main chunk to the chunk with the chat window component. It reduced entrypoint size by 3.7kB and removed unnecessary render cycles, according to React Profiler.

We also found that the Day.js library was included in the entrypoint chunk, which we found odd. We actively use this library inside the Chat Window component, so we expected to see this library only inside the chunk related to this component.

In one of the initialization methods, we found usage of utc() and isBefore() from this library, functionality that is already present in the native Date API. To parse a date string in ISO format, you can call new Date(), and to compare two dates you can use the < operator directly. By rewriting this code, we reduced the entrypoint size by 6.67kB gzipped. Not a lot, but it all adds up.
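The idea is simple enough to sketch. The function and values below are illustrative, not the actual widget code:

```javascript
// dayjs.utc(a).isBefore(dayjs.utc(b)) can be replaced with native Date:
// new Date() parses ISO 8601 strings, and Date objects compare
// numerically via <, so no date library is needed for this check.
function isBeforeNative(isoA, isoB) {
  return new Date(isoA) < new Date(isoB);
}

console.log(isBeforeNative('2021-11-26T00:00:00Z', '2021-11-29T00:00:00Z')); // true
```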

Finding alternatives for big libraries

Another offender was the official Sentry client (23.4kB gzipped). Its size is a known issue that has not been resolved yet.

One option is to lazy load this SDK. But in this case, there was a risk that we could miss errors occurring before the SDK fully loaded. We followed another approach, using an alternative called micro-sentry. It’s only 2kB and covers all the functionality we needed.

We also tried replacing React with Preact, which worked really well and decreased the bundle size by 33kB gzipped. However, we couldn't find a big difference in the final performance score.

After further discussion with the team, we decided not to use it for now. The React team could introduce some interesting features in new versions (concurrent mode, for example, looks very promising), and it would take the Preact team some time to adopt them. It happened before with hooks: the stable Preact implementation followed a full year after React's.

Finding more compression opportunities

From further inspection, we found the mp3 file used for the notification sound could be compressed using FFmpeg without a noticeable difference in sound, saving 17.5kB gzipped.

We also found that we used the TTF format for font files, which is not a compressed format. We converted them to WOFF2 and WOFF, which reduced each font file by 23kB gzipped, 115kB in total.

We didn't notice any difference in performance score after these changes, but it was not a redundant exercise. We now transfer less information, using fewer network resources, which can benefit customers with poor network connections.

Delivering chat assets from the browser cache

We already used a content delivery network (CDN) to improve loading times, but we were able to reconfigure its cache policies to make it more efficient. Instead of downloading the chat every time a user visits the page, it is now downloaded over the network only on the first visit; all subsequent visits use the version from the browser cache.
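The exact cache policy isn't spelled out here, but for versioned, fingerprinted assets a typical response header looks like this (values are illustrative):

```
# Response header for versioned chat assets (e.g. chat-window.abc123.js):
# cache for a year and skip revalidation, since the filename changes on deploy
Cache-Control: public, max-age=31536000, immutable
```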

A CDN is a very good way to deliver assets to clients, because CDN providers store a cached version of the chat application's assets in multiple geographic locations and serve them based on the visitor's location. For example, when someone in London accesses a website with our chat, the chat assets are downloaded from a server in the United Kingdom.

Results and impact of the bundle reorganization

Below, you can see how the bundle composition changed after applying the fixes we’ve mentioned. The entrypoint file was halved in size, and the total amount of JS was reduced by 35kB gzipped.

Chat widget bundle (Javascript only) after reorganization

And here’s the full chart covering all chat assets, including the static files.

Chat widget bundle overall after reorganization

To see the impact of these reductions, we performed Google Lighthouse audits on our Shopify test store using three configurations:

  • Without chat (as a control)
  • With unoptimized chat
  • With optimized chat

We also used the mobile preset to tighten the conditions. In this mode, Lighthouse simulates a mobile network and applies CPU throttling.

Here are the results:

  • Without any chat, the performance score was around 97-98 points
  • With unoptimized chat, the score dropped to around 83-85 points
  • With optimized chat, the score jumped back up to around 96-97 points
Google Lighthouse score improvements after updating chat widget

Not only did we improve on the original penalties, but we were able to get the performance score almost to the same level as when there is no chat enabled at all.

This is in line with, or outperforms, most other chat widgets we have analyzed.

Preventing future regression

To maintain the current levels of performance and impact, we added a size-limit check to our continuous integration pipeline. When you open a pull request, our CI server builds the bundle, measures its size and raises an error if it exceeds the defined limit.
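As a rough illustration, a size-limit setup lives in package.json and fails the build when a budget is exceeded. The paths and limits below are hypothetical, not our actual budgets:

```json
{
  "scripts": {
    "size": "size-limit"
  },
  "size-limit": [
    { "path": "dist/entrypoint.js", "limit": "110 kB" },
    { "path": "dist/chat-window.js", "limit": "120 kB" }
  ]
}
```

Running `npm run size` in CI then compares each built file's compressed size against its limit and exits with an error on any overage.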

When you import a function, it’s not always obvious what kind of code would be added under the hood — sometimes it's just a few bytes of code, but other times it could import a large library.

This new step makes it possible to detect these regressions in a timely manner.

Size-limit check

It's also possible to define a time limit using this tool. In this case, the tool runs a headless version of Chrome to track the time a browser takes to compile and execute your JS.

While it sounds nice in theory, we found the results from this method very unstable. There's an open issue with a suggestion on how to make measurements more stable, so hopefully we can take advantage of the time-limit functionality in the future.

Think about performance before it becomes an issue

It turns out there is a lot of low-hanging fruit when it comes to performance optimization.

Just by using built-in developer tools in the browser and a plugin to generate a visual representation of the bundle, you might find a lot of opportunities to optimize performance without refactoring the whole codebase. In our case, we reduced entrypoint file size by 49% and reduced impact on the client's website significantly.

If you work on a new project, we strongly advise you to think about performance before it's too late. You can prevent the accumulation of technical debt by taking simple steps like checking bundlephobia before installing a library, adding size-limit to your build pipeline and running Lighthouse audits from time to time.

Gorgias 2021 Year in Review

Gorgias in 2021: 8,000 ecommerce brands turned support challenges into $1.1 billion

By Ryan Baum
7 min read.

As ecommerce grew this year, we continued to work toward a decentralized vision of commerce — a model where merchants take back their customer relationships from colossal marketplaces and connect one-to-one with the people who buy their products.

Our merchants had a record-breaking number of these personal interactions in 2021, and that’s worth celebrating. So we’ve collected all the firsts, upgrades, and proudest moments to share with you.

Since January 2021 feels like 10 years ago (and also 10 minutes ago, somehow), let’s take a walk down memory lane.

  1. 8,000 brands with one thing in common
  2. 75 million chances to improve customer experience
  3. Our merchants met shoppers wherever they were
  4. Assembling the ecommerce A-Team
  5. Customer feedback drove our product roadmap
  6. Gorgias grew alongside our merchants
  7. Looking ahead to 2022

8,000 brands with one thing in common

This year, we helped 8,000 brands support over 290 million shoppers, bringing in customers like Bidabo, Biketart, Lillie's Q and Livinguard.

Altogether, our customers generated $1.1 billion from their customer support functions in 2021.

Those companies varied in size, from single entrepreneurs still proving their products to enterprise companies scaling beyond their wildest dreams. Differences aside, they united in prioritizing customer experience to grow their businesses.


Some industries came up again and again on our roster, including: 

And because Gorgias powered growth across 110 industries, our customers’ customers were purchasing everything from medical supplies to maritime essentials.


75 million chances to improve customer experience

Every minute of 2021, Gorgias customers closed out an average of 179 tickets. In more relatable terms, they helped more than 10,000 shoppers in the time it took to watch a new episode of Shark Tank.

At the peak of support volume — the five-day period from Thanksgiving and Black Friday through Cyber Monday (BFCM) — our merchants answered 2.5 million tickets. Their support teams drove $25.6 million in sales during that time.

With tools made for that moment, they were able to stay on top of the ticket pile and turn the holiday rush into a gold rush.


The impact didn’t stop there. On average, our merchants received a 4/5 satisfaction rating from their customers in 2021. The 75 million tickets they answered reinforced their brands, one loyal customer at a time.

After all, when your team has a million fires to extinguish, the only flames in customer support should be the emoji reactions to your five-star ratings.

And that’s exactly what you’ll be chasing as your performance metrics approach those of our top quartile of merchants. The top-performing teams clocked first-response times under two hours and resolution times under eight hours, on average.


Our merchants met shoppers wherever they were

As ecommerce becomes more decentralized, so do the channels that provide your customer feedback.

Still, it’s no surprise that email remains the most popular support channel, used by 92% of our brands. Together, they answered 64 million emails in 2021 (85% of all tickets). 

This next stat may be more of a revelation: 78% of our brands have brought Facebook, Instagram, and/or Twitter interactions into their Gorgias workspace. They answered 3.7 million comments across those three channels, with almost two-thirds coming from Facebook.


These social channels were used even more than our live chat, phone, and SMS integrations. And Gorgias helped merchants meet their customers in all of the above, without ever leaving their dashboard.

Assembling the ecommerce A-Team

2021 also saw the launch of our long-awaited Gorgias App Store. This hub features 75 apps to extend the power of our helpdesk and centralize the information support agents rely on. 


62% of our merchants are using at least one of our partner apps, and we’re exploring new partnerships all the time to continue streamlining the customer support process. 

This allows us, and all of our partners, to stay focused on being the absolute best at what we do.

Some of our merchants’ favorite integrations include: 

  • Klaviyo: An email and SMS marketing automation platform
  • Recharge: For subscriptions and recurring payments
  • Attentive: A comprehensive text message marketing solution 
  • Postscript: SMS marketing for growing ecommerce stores
  • Yotpo: For customer reviews, loyalty, referrals, and more


So go ahead and close those 20 tabs out — you won’t need them where we’re headed.

Customer feedback drove our product roadmap

We released 91 features this year, 42 of which were driven by your requests on our public roadmap.

Our most requested features (that are all available today!) were: 

The quick adoption of our 2021 social media updates made it clear these channels were critical to our merchants’ success this year. We expect that to continue into 2022. (TikTok, anyone? Give it an upvote here!) 

And while voice support didn’t see the same volume of requests as the social channels, we knew it was essential for certain brands. To better serve these merchants, we built a native phone integration that’s easily set up for new and existing numbers.

Merchants responded by taking more than 4,000 calls from shoppers this year. As a result, their resolution times were up to 34% faster than those of merchants who left phone support out of their strategies.


And while we want to give our merchants a variety of tools to provide help, sometimes it's best to empower shoppers to help themselves. 

To work toward this goal, our new Help Center feature provides FAQ hubs on merchant websites. The first 100 Help Centers that went live attracted over 100,000 views, answering inquiries before they could turn into tickets.

That goal also drove perhaps our most exciting release: our Automate product allows merchants to customize self-service flows and deflect even more tickets, boosting team efficiency. 

Hundreds of merchants used the add-on in 2021 to automate their tickets, increasing efficiency across their support teams.

Our self-service portal alone deflected up to another 33% of tickets specific to shoppers (like order status). This freed up agent time to provide a more personal touch to important conversations. 

Gorgias grew alongside our merchants

We tripled the size of our team in 2021 to continue building the best possible helpdesk for the specific needs of ecommerce brands. There are now 185 employees who work in 16 countries around the globe and speak 18 different languages.


That means there are more Gorgians building out integrations, furthering the product roadmap, and contributing to our merchants’ success.

And our customers have let us know how much these improvements impacted their businesses. We currently hold top marks among the helpdesk categories on G2, Capterra and the Shopify app store.


Looking ahead to 2022 

2021 was a year to remember for the Gorgias team and our customers, but 2022 is shaping up to be even better. It might even be the year people learn to pronounce our name. (470 people asked how during this year’s demos; think “gorgeous.”) 

Fingers crossed.

Either way, we have some key new features on the roadmap and several surprises up our sleeves. We’ll continue building and optimizing channels so you can meet your customers where they are (including a much-requested WhatsApp integration). We’re also going to renew our focus on automation tools to increase efficiency across your team. 

Make sure you subscribe to our newsletter, below, to beam all of our updates directly to your inbox.

As for the rest of the ecommerce industry, we have high hopes for 2022 (and plenty of predictions). We’re expecting a continued shift of support tickets to social channels, a bigger emphasis on self-service options, and a sharper focus on app integrations across the ecommerce ecosystem. 

Until then, thanks for a great year!
