

Four months ago, our analysts were dealing with a barrage of questions. "What's our ARR by segment?" "Build me a dashboard for this quarter's pipeline." Quick asks piled up behind complex deep dives. Stakeholders waited for answers that should have taken seconds, and analysts spent their time fielding requests instead of doing the strategic work that creates the most value.
Today, anyone at Gorgias can ask a question in plain language and get an accurate, contextualized response in seconds. Not from a colleague, a dashboard, or a generic answer from the internet, but from a response built on our business context. We call it Cortex, our flagship internal AI agent.
In two months, Cortex went from an idea to fielding thousands of questions every week, recommending actions across the business, and eliminating the need for manual dashboard creation. While most companies are still treating AI as an initiative, at Gorgias AI is already part of how we work. 72% of Gorgias employees use Cortex each week, and that number is only growing.
We didn’t achieve this by simply plugging a large language model into our stack. LLMs are a critical part of the equation, but they aren't the driving force — it’s everything else under the hood: the infrastructure, context, platform architecture, and the team that brings it all together.

The instinct across many companies today is to start with the model, pick a provider to solve a specific challenge, or invest heavily in getting the data right. All reasonable starting points, but most of them solve for one use case. Underneath that approach is a framing problem: seeing AI as an initiative, something you assign and measure; seeing AI as another tool your company uses rather than how your company operates.
We started somewhere different. Every company is built on four pillars: customers, people, product, and decisions. AI investments tend to place heavy emphasis on the first three. We started with the fourth. Our bet was that if we built everything around the need to make effective decisions first, asking what Gorgias needed to know to operate well, then our AI would become dramatically more powerful.
Cortex is the product where we established the tenets that now run through everything else we build: composable, modular infrastructure; governed context; and accessibility from wherever decisions happen. Cortex lives in Slack, across LLM vendors, in its own browser extension, and even on its own dedicated internal site.
Cortex doesn’t stop at answering questions. It can read and write to Notion, file Linear tasks, create HTML apps, automate signal delivery, and more. It operates across every layer of our stack, from dashboards to data pipelines, because we designed it as one integrated system. It is this connection that adds remarkable depth to what people can ask, and what they get in return.

A sales lead is pitching and asks Cortex for the full picture of the merchant. In a customized PDF, Cortex lists coverage gaps, pre-sale intent signals, and product-fit options. Everything the sales lead needs to walk in with confidence.
A Senior Product leader asks, "How are we performing against OKR #1, and what can my team do to help accelerate it?" Cortex returns a full ARR breakdown, projected end-of-month attainment, segment-level findings, and connects it all back to company-level strategies. A suite of recommendations customized to the leader, the performance, and the signals that bridge how they can support our goals. The kind of answer that used to take someone a week to put together.
These aren't simple lookup queries. They require deep business context spanning multiple areas. Cortex handles these because its Decision Engine gives it the information to reason against governed data, metric definitions, and business context, turning a generic answer into a credible one.
Overnight, teams have built Cortex into how they work. They're spending less time searching for answers and more time acting on them, not because they were told to, but because Cortex reduced the distance between question and decision.
Cortex’s modular infrastructure allows us to experiment and add new capabilities freely. We’ve already built two more internal AI agents made for entirely different use cases, but using the same Decision Engine as Cortex.
GAIA, our internal experimentation AI Agent, helps our customers identify opportunities in their AI Agent Guidance design. It takes institutional knowledge across our teams and turns it into a scalable system that drives automation and value to our customers. Our CEO, Romain Lapeyre, has been its most vocal advocate since day one.
When we needed a platform for investor readiness and board preparation, we built Oracle. Our board decks and talk tracks are informed and built with the same AI, and our numbers are validated every step of the way.
We're continuing to build new AI agents internally, exploring how they can create value for customers and for our own teams.
When AI handles thousands of analytical questions each week, the highest-value work for a data team shifts permanently. In late 2025, we repositioned from a Data Analytics function into a Decision Intelligence function, a structural change in what we own and how we operate.
Today, our analysts focus on the most sensitive, complex, and forward-looking decisions and analyses. They partner more deeply with stakeholders by driving next steps from signals. They're even building entirely new capabilities that didn't exist in their role descriptions months ago. Things like AI skills for Cortex, context curation, and insight and recommendation delivery. The role of the analyst hasn't diminished. It's expanded to encompass the most meaningful work an analyst can do: driving outcomes and ensuring those decisions can achieve them.

Our business support model has changed, too. Instead of embedding analysts and dedicated engineers within functional teams, we align capacity to the highest-impact company objectives and move fluidly across them. This model works even better because Decision Intelligence brings together both analytics and engineering teams under one roof.
Elliot Trabac leads our Data, Context and AI Engineering teams. The Decision Engine, Cortex, GAIA, and the platforms I've described exist because of the infrastructure his team innovated and built from the ground up. Noemie Happi Nono leads our Decision Strategy and Operations team, driving decision outcomes with stakeholders, advancing the development of Cortex skills and capabilities, and pushing into new areas of analysis every day.
Together, they're shaping what a modern data function looks like when AI becomes a standard building block for how a company operates.
The question of ROI is long gone. AI has opened the floodgates to more trusted and meaningful signals than ever. The natural next evolution is Proactive Intelligence: signals surfaced to tell you what you need to know before you ask. And we're already building it, because our architecture is designed to support it.
In the coming weeks, members of the Decision Intelligence team will go deeper into themes I've touched on here. Yochan Khoi, a Senior Analytics Engineer on our team, recently published a technical walkthrough of our context layer and will go further into building context strategies that scale. Others will cover infrastructure, analytical partnerships, evolving data assets into decision assets, and the cost and efficiency gains that make sustained AI investment viable.
AI hasn't changed the most important element of data and analytics functions — delivering outcomes — but it has raised the bar for what it looks like and how far we can take it. We’re just getting started.
TL;DR:
The way shoppers buy online has shifted, and customers are at the center.
They no longer want to scroll through product pages, dig through FAQs, or wait 24 hours for an email reply. They open a conversation, ask a specific question, and expect a useful answer in seconds. Brands that can’t deliver these experiences at scale are seeing customer hesitation turn into abandoned carts and lost revenue.
This shift has a name: conversational commerce. It's the practice of using real-time, two-way conversations as your primary sales channel, through chat, AI agents, messaging apps, and voice.
What started as an experiment for early adopters has become a key growth lever: 84% of ecommerce brands say conversational commerce is more strategically important this year than it was last year.

We surveyed 400 ecommerce decision-makers across North America, the U.K., and Europe to understand how conversational commerce and AI are reshaping the ecommerce landscape. These findings are complemented by aggregated and anonymized internal Gorgias platform data from 16,000+ ecommerce brands.
The State of Conversational Commerce in 2026 trends report breaks down all of the findings, including five key trends shaping the ecommerce landscape.
{{lead-magnet-1}}
A few years ago, adding an AI chatbot to your site that could provide tracking links and Help Center article recommendations was a differentiator. Today, it's table stakes. McKinsey found that 71% of shoppers expect personalized experiences, and 76% get frustrated when they don't get them.
Right now, most ecommerce professionals use AI, with 93% having used it for at least a year. Enthusiasm is accelerating quickly: as recently as April 2025, only 30% of ecommerce professionals rated their excitement for AI at 10/10. And while AI adoption rose steadily year over year, it reached a clear peak in 2026.

The use cases driving this adoption are practical and high-volume:

These are the tickets that flood brands’ inboxes every day. AI agents resolve them instantly, without pulling teams away from conversations that actually require human judgment.
Explore AI adoption and use case data in more depth in the full report.
The traditional ecommerce funnel (visit site, browse products, add to cart, check out) is losing ground. Shoppers now discover products on Instagram, ask questions via direct message, and complete purchases without ever visiting a website.

Conversational AI is actively increasing revenue, with 79% of brands reporting that AI-driven interactions have increased sales and conversion in their business.

The practical implication is that every channel is becoming a storefront. Creating personalized touchpoints with customers earlier in the journey, through proactive engagement, is impacting the bottom line.
Read the full report to explore how AI conversions have increased QoQ by industry.
Pre-purchase hesitation is one of the biggest conversion killers in ecommerce. A shopper lands on your product page, has a question about sizing or compatibility, can't find the answer quickly, and leaves. That's a lost sale that had nothing to do with your product.
Conversational AI changes that dynamic. When a shopper can ask a question and get an accurate, personalized answer in real time, the friction disappears.
Brands using Gorgias saw this play out at scale in 2025. When AI Agent recommended a product, 80% of the resulting purchases happened the same day, and 13% happened the next day.

Brands are further accelerating the buying cycle through proactive engagement. On-site features such as suggested product questions, recommendations triggered by search results, and “Ask Anything” input bars drove 50% of conversation-driven purchases during BFCM 2025.
Explore how AI is collapsing the purchase cycle in Trend 3 of the report.
There's a persistent narrative that AI is making CX teams redundant. The data tells a different story. 62% of ecommerce brands are planning to grow their teams, not cut them. But the scope of those teams is changing.

New roles are emerging around AI configuration and quality assurance. Teams are investing in technical members to write AI Guidance instructions, develop tone-of-voice instructions, and continuously QA results.
CX teams are also bridging the gap between support goals and revenue goals, as the two functions increasingly overlap.

The result is CX teams that are more technical than they were before. Agents who once spent their days answering repetitive tickets are now spending that time on higher-value work: complex escalations, VIP customer relationships, and improving the AI systems and knowledge bases that handle the volume.
Learn more about the evolution of CX roles in Trend #4.
Despite increasing AI adoption, data shows that ecommerce brands shouldn’t strive for 100% automation. Winning brands are building systems in which AI handles repetitive tier-1 tickets, and humans handle complex, sensitive cases.

AI handles speed and scale. It resolves order-tracking requests at 2 a.m., processes return-eligibility checks in seconds, and answers the same shipping question for the thousandth time without compromising quality.
Human agents handle conversations that require context, empathy, or decisions that fall outside the standard playbook. There are several topics where shoppers still prefer human support.

Successful hybrid systems require continuous iteration: reviewing handover topics, Guidance, and AI tickets on a weekly basis.
Discover how leading brands are balancing human and AI systems in Trend #5.
The 2026 trends are about expansion and standardization. The 2030 predictions are about what comes next.

Voice-based purchasing is the biggest bet on the horizon. Only 7% of brands currently use voice assistants for commerce, but 89% expect it to be standard by 2030. The vision is a customer who can reorder a product, check their subscription status, or manage a return entirely over the phone.
Proactive AI is the other major shift. Rather than waiting for a customer to reach out, AI will anticipate needs based on browsing behavior, purchase history, and where someone is in their relationship with your brand. Think of it as the digital equivalent of a sales associate who remembers what you bought last time and knows what you're likely to need next.
Explore where ecommerce brands are allocating their AI budgets in the full report.
The brands winning in 2026 are creating smart, scalable systems where AI handles volume and humans handle nuance. They're treating every conversational channel as an opportunity to serve and sell.
The data is clear: AI adoption is accelerating, customer expectations are rising, and the revenue impact of getting this right is measurable.
{{lead-magnet-1}}

TL;DR:
The page-based shopping experience dominated for decades. Customers would search, browse, compare, abandon, get retargeted, return, and eventually buy (sometimes).
That journey is no longer the only option.
Shoppers are turning to chat, messaging, and AI-powered tools to find what they need. Instead of clicking through product pages or reading static FAQs, they ask questions, have back-and-forth conversations, and get answers that move them closer to a purchase in real time. The path to checkout has changed, and the brands that recognize this are pulling ahead.
Read our 2026 State of Conversational Commerce Report to learn more about conversational commerce trends from 400 ecommerce decision-makers and 16,000+ ecommerce brands using Gorgias.
{{lead-magnet-1}}
The traditional shopping journey was a solo experience. A shopper had a need, searched for options, browsed across sessions, and eventually made a decision — often days later, after being retargeted multiple times. Support only entered the picture after the purchase.

The conversation-led journey collapses that timeline:
What used to take days now takes minutes. Discovery, evaluation, and purchase happen in a single thread.
79% of brands agree that AI-driven conversational commerce has increased sales and purchase rates in their business. When brands were asked to rank the highest-return areas:
Those numbers reflect something important: the value of conversation compounds. Faster support reduces friction. Better retention raises lifetime value. More confident shoppers buy more often and spend more per order.
The brands seeing the biggest returns aren't just using AI to deflect tickets. They're using it to create one-to-one shopping experiences at scale.
Looking at AI-only influenced orders across key verticals like Apparel and Accessories, Food and Beverages, Health and Beauty, Home and Garden, and Sporting Goods, the growth across a single year was significant.





Across industries, ecommerce brands saw AI step into conversations, reduce shopper hesitation, and drive higher QoQ conversion rates.
Learn more about AI-powered revenue generation in the full 2026 Conversational Commerce Report.
84% of brands say the strategic importance of conversational commerce is higher than it was a year ago. 82% agree it will be mainstream in their sector within two years.

That shift is registering at the leadership level because of what conversational commerce does to the buying experience. Creating one-to-one touchpoints earlier in the journey drives higher AOV, shorter buying cycles, and stronger purchase rates. Shoppers who get real-time answers to their questions are more confident.
TUSHY, known for eco-friendly bidets and bathroom essentials, is a useful example of what happens when you take conversational commerce seriously.
Bidets aren't an impulse purchase. Shoppers have real questions about fit, compatibility, and installation. Those questions used to go unanswered until the CX team could respond, often after the customer had abandoned the cart.
TUSHY used Gorgias's AI Agent and shopping assistant capabilities to automate pre-sales support. AI Agent engaged shoppers in real-time conversations, addressed their concerns directly, and built confidence at the moment of highest intent.
This resulted in a 190% increase in chat-based purchases, a 13x return on investment, and twice the purchase rate of human agents.
You don't need to overhaul your entire operation to start seeing results. The most effective approach is to start where the impact is clearest and expand from there.
A few places to begin:
Want to see the full picture of where conversational commerce is headed in 2026? Read the full report to explore the data, trends, and strategies shaping the next era of ecommerce.
{{lead-magnet-1}}

TL;DR:
A year ago, ecommerce brands were still debating whether AI was worth the investment. That debate is over. Today, nearly every ecommerce professional uses AI to do their job.
The shift isn't just about adoption. It's about what AI is used for and how brands measure its impact. Support automation was the entry point. Now, AI is embedded across the full operation, from product recommendations to inventory control to real-time shopping conversations.
In our 2026 State of Conversational Commerce Report, we break down trends on AI usage among 400 ecommerce decision-makers and 16,000+ ecommerce brands using Gorgias.
{{lead-magnet-1}}
If we rewind 12 months, the industry was still split on AI. Some ecommerce professionals were excited, but most were still hesitant. In 2024, 69% of ecommerce professionals used AI in their roles. By 2025, that number reached 77%. In 2026, it hit 96%.

The confidence numbers back it up. 71% of brands say they are confident using AI for ecommerce, and 73% are satisfied with its business impact.
In early 2025, only 30% of ecommerce professionals rated their excitement for AI at 10/10. Today, zero percent of respondents describe themselves as hesitant about AI.

Using AI in ecommerce is not new. In fact, it dates back to the 1980s and the first expert systems and recommendation algorithms. And if you’ve ever leveraged similar-product recommendations or chatbots, you’ve already integrated AI into your ecommerce stack.
Modern AI is far more sophisticated.
With the rise of agentic commerce and conversational AI, brands began leveraging AI agents to automate the processing of repetitive support tickets. That’s still happening today, but the scope has expanded beyond the support queue.

Ecommerce brands are deploying AI across every layer of their operation:
When brands were asked which channels contribute most to their AI success, conversational channels dominated. Social media messaging led at 78%, followed by SMS at 70%, and website live chat at 51%. Shoppers want fast, personal conversations, and AI is the best way to deliver that at scale.
Learn more about AI adoption, perception, and use case trends in the full 2026 Conversational Commerce Report.
For decades, customer support success meant fast response times and high satisfaction scores. Those are still important indicators of success, but leading brands are adding revenue-focused metrics to their dashboards.
91% of brands still track CSAT as a measure of AI's impact. But 60% now include AOV as a top indicator, and higher-revenue brands earning $20M+ are focusing on metrics like total operating expenses, cost per resolution, incremental revenue, and one-touch ticket rate.

AI can now start a conversation, ease customer doubts, sell, upsell, and recover abandoned carts, all in a single thread. When you’re only measuring CSAT, you’re ignoring the real ROI of your conversational AI investment.
Virtual shopping assistants now proactively engage shoppers, adapt to their needs in real time, and offer contextual product recommendations and upsells. When the moment calls for it, they can close the deal with a targeted discount.
Gorgias brands using AI Agent's shopping assistant capabilities nearly doubled their purchase rates and converted 20–50% better than those using AI Agent for support only.
Orthofeet, the largest provider of orthopedic footwear in the US, is a concrete example of this in practice. Using Gorgias, they achieved:
The data tells a clear story: AI has evolved beyond a tool for handling tier 1 support tickets. It’s a core part of your revenue generation strategy.
57% of brands are already using AI for 26–50% of all customer interactions, and 37% expect that share to rise to 51–75% within the next two years. The brands building toward that range now are the ones who will have the operational advantage when it matters most.
The practical question isn't whether to invest in AI. It's where to focus first. Based on where brands are seeing the most impact, three priorities stand out:
Want to go deeper on the full 2026 conversational commerce trends? Read the complete report for data across every major AI use case in ecommerce.
{{lead-magnet-1}}

TL;DR:
Customer education has become a critical factor in converting browsers into buyers. For wellness brands like Cornbread Hemp, where customers need to understand ingredients, dosages, and benefits before making a purchase, education has a direct impact on sales. The challenge is scaling personalized education when support teams are stretched thin, especially during peak sales periods.
Katherine Goodman, Senior Director of Customer Experience, and Stacy Williams, Senior Customer Experience Manager, explain how implementing Gorgias's AI Shopping Assistant transformed their customer education strategy into a conversion powerhouse.
In our second AI in CX episode, we dive into how Cornbread achieved a 30% conversion rate during BFCM, saving their CX team over four days of manual work.
Before diving into tactics, understanding why education matters in the wellness space helps contextualize this approach.
Katherine, Senior Director of Customer Experience at Cornbread Hemp, explains:
"Wellness is a very saturated market right now. Getting to the nitty-gritty and getting to the bottom of what our product actually does for people, making sure they're educated on the differences between products to feel comfortable with what they're putting in their body."
The most common pre-purchase questions Cornbread receives center around three areas: ingredients, dosages, and specific benefits. Customers want to know which product will help with their particular symptoms. They need reassurance that they're making the right choice.
What makes this challenging: These questions require nuanced, personalized responses that consider the customer's specific needs and concerns. Traditionally, this meant every customer had to speak with a human agent, creating a bottleneck that slowed conversions and overwhelmed support teams during peak periods.
Stacy, Senior Customer Experience Manager at Cornbread, identified the game-changing impact of Shopping Assistant:
"It's had a major impact, especially during non-operating hours. Shopping Assistant is able to answer questions when our CX agents aren't available, so it continues the customer order process."
A customer lands on your site at 11 PM, has questions about dosage or ingredients, and instead of abandoning their cart or waiting until morning for a response, they get immediate, accurate answers that move them toward purchase.
The real impact happens in how the tool anticipates customer needs. Cornbread uses suggested product questions that pop up as customers browse product pages. Stacy notes:
"Most of our Shopping Assistant engagement comes from those suggested product features. It almost anticipates what the customer is asking or needing to know."
Actionable takeaway: Don't wait for customers to ask questions. Surface the most common concerns proactively. When you anticipate hesitation and address it immediately, you remove friction from the buying journey.
One of the biggest myths about AI is that implementation is complicated. Stacy explains how Cornbread’s rollout was a straightforward three-step process: audit your knowledge base, flip the switch, then optimize.
"It was literally the flip of a switch and just making sure that our data and information in Gorgias was up to date and accurate."
Here's Cornbread’s three-phase approach:
Actionable takeaway: Block out time for that initial knowledge base audit. Then commit to regular check-ins because your business evolves, and your AI should evolve with it.
Read more: AI in CX Webinar Recap: Turning AI Implementation into Team Alignment
Here's something most brands miss: the way you write your knowledge base articles directly impacts conversion rates.
Before BFCM, Stacy reviewed all of Cornbread's Guidance and rephrased the language to make it easier for AI Agent to understand.
"The language in the Guidance had to be simple, concise, very straightforward so that Shopping Assistant could deliver that information without being confused or getting too complicated," Stacy explains. When your AI can quickly parse and deliver information, customers get faster, more accurate answers. And faster answers mean more conversions.
Katherine adds another crucial element: tone consistency.
"We treat AI as another team member. Making sure that the tone and the language that AI used were very similar to the tone and the language that our human agents use was crucial in creating and maintaining a customer relationship."
As a result, customers often don't realize they're talking to AI. Some even leave reviews saying they loved chatting with "Ally" (Cornbread's AI agent name), not realizing Ally isn't human.
Actionable takeaway: Review your knowledge base with fresh eyes. Can you simplify without losing meaning? Does it sound like your brand? Would a customer be satisfied with this interaction? If not, time for a rewrite.
Read more: How to Write Guidance with the “When, If, Then” Framework
The real test of any CX strategy is how it performs under pressure. For Cornbread, Black Friday Cyber Monday 2025 proved that their conversational commerce strategy wasn't just working, it was thriving.
Over the peak season, Cornbread saw:
Katherine breaks down what made the difference:
"Shopping Assistant popping up, answering those questions with the correct promo information helps customers get from point A to point B before the deal ends."
During high-stakes sales events, customers are in a hurry. They're comparing options, checking out competitors, and making quick decisions. If you can't answer their questions immediately, they're gone. Shopping Assistant kept customers engaged and moving toward purchase, even when human agents were swamped.
Actionable takeaway: Peak periods require a fail-safe CX strategy. The brands that win are the ones that prepare their AI tools in advance.
One of the most transformative impacts of conversational commerce goes beyond conversion rates. What your team can do with their newfound bandwidth matters just as much.
With AI handling straightforward inquiries, Cornbread's CX team has evolved into a strategic problem-solving team. They've expanded into social media support, provided real-time service during a retail pop-up, and have time for the high-value interactions that actually build customer relationships.
Katherine describes phone calls as their highest value touchpoint, where agents can build genuine relationships with customers. “We have an older demographic, especially with CBD. We received a lot of customer calls requesting orders and asking questions. And sometimes we end up just yapping,” Katherine shares. “I was yapping with a customer last week, and we'd been on the call for about 15 minutes. This really helps build those long-term relationships that keep customers coming back."
That's the kind of experience that builds loyalty, and becomes possible only when your team isn't stuck answering repetitive tickets.
Stacy adds that agents now focus on "higher-level tickets or customer issues that they need to resolve. AI handles straightforward things, and our agents now really are more engaged in more complicated, higher-level resolutions."
Actionable takeaway: Stop thinking about AI only as a cost-cutting tool and start seeing it as an impact multiplier. The goal is to free your team to work on conversations that actually move the needle on customer lifetime value.
Cornbread isn't resting on their BFCM success. They're already optimizing for January, traditionally the biggest month for wellness brands as customers commit to New Year's resolutions.
Their focus areas include optimizing their product quiz to provide better data to both AI and human agents, educating customers on realistic expectations with CBD use, and using Shopping Assistant to spotlight new products launching in Q1.
The brands winning at conversational commerce aren't the ones with the biggest budgets or the largest teams. They're the ones who understand that customer education drives conversions, and they've built systems to deliver that education at scale.
Cornbread Hemp's success comes down to three core principles: investing time upfront to train AI properly, maintaining consistent optimization, and treating AI as a team member that deserves the same attention to tone and quality as human agents.
As Katherine puts it:
"The more time that you put into training and optimizing AI, the less time you're going to have to babysit it later. Then, it's actually going to give your customers that really amazing experience."
Watch the replay of the whole conversation with Katherine and Stacy to learn how Gorgias’s Shopping Assistant helps them turn browsers into buyers.
{{lead-magnet-1}}

There are now over 85 incredible integrations in the Gorgias App Store with the tools that power your ecommerce store. While each app is unique, together these integrations can help your agents work more efficiently to provide excellent service to your customers.
Take a look at the newest additions so far from 2022.
In the first half of the year, we’ve launched 15 new integrations for your Gorgias helpdesk:
Read on to learn how you can use these tools to help manage your store, and visit the Gorgias App Store to activate them today!

Klaviyo is an email and SMS marketing automation platform built for ecommerce. Gorgias was the first helpdesk to connect to Klaviyo SMS, allowing your brand to create seamless conversations between your marketing campaigns, shoppers, and support team.
With the updated Klaviyo integration, you can:
This integration helps you streamline customer interactions and create higher-converting marketing campaigns. To learn more, go to the Gorgias App Store.

We recently released Gorgias SMS, an easy way for your brand to offer this convenient and conversational communication channel. It’s one of the fastest-growing support channels for ecommerce brands, and one of the most reliable for customers to contact you on (since it’s not dependent on internet access).
With Gorgias SMS, you can:
Click here to learn more about Gorgias SMS, available with all plans.

Thankful AI is a platform dedicated to helping you deliver better support for the post-purchase needs of your customers. The AI is tailored specifically for retail and ecommerce businesses, so you don’t have to worry about a disjointed experience.
With this integration, the Thankful AI agent can:
This frees up your agents to focus on more meaningful conversations with customers. Visit the Gorgias App Store to learn more about the Thankful integration.

NetSuite is a cloud ERP including financials, CRM, and ecommerce. It helps brands work more efficiently, take control of inventory and fulfillment, and bring all their tools together in a unified business management suite.
Sync NetSuite data into Gorgias to give your agents important customer & order information in a single tab.
With this integration, you can:
This helps your agents have all the context they need next to every conversation they have. Visit the Gorgias App Store to learn more.

Okendo is a customer marketing platform and an Official Google Reviews partner that helps brands capture and showcase high-impact social proof such as product ratings & reviews, customer photos & videos, and Q&A messageboards.
With this integration, you can:
Visit the Gorgias App Store to learn more about our Okendo integration.

Link Narvar Return & Exchanges for Shopify with Gorgias to automate returns management and get rich insights that help you save costs and improve operations.
With this integration, you can:
To learn more about our Narvar integration, visit the Gorgias App Store.

Skio helps brands on Shopify sell subscriptions. With this integration, you can add a Skio widget to your Customer Sidebar in Gorgias. This gives your agents insights into customer subscriptions right in the helpdesk without having to switch tabs.
With this integration, you can:
To learn more about our Skio integration, visit the Gorgias App Store.

Via is a mobile commerce (SMS marketing) platform for ecommerce businesses. Send personalized messages to your customers for increased revenue and customer satisfaction.
With this integration, you can:
Visit the Gorgias App Store to learn more.

With Clyde and Gorgias working together, you can create a seamless and positive support experience by syncing all warranty data inside your Gorgias account. Stay focused and close tickets faster by viewing Clyde contracts and claims information in the same window you use to talk to customers.
With this integration, you can:
Manage warranty requests & find claims information in one tool. Head to the Gorgias App Store to learn more.

Smartrr is a seamless, full-service subscription solution. Paired with Gorgias, you can equip your team with the best customer service tools in one convenient location to increase customer satisfaction and drive customer loyalty.
With this integration, you can:
To learn more about our Smartrr integration, go to the Gorgias App Store.

ShipMonk is an order fulfillment platform for ecommerce businesses ready to scale. They offer technology-driven fulfillment solutions that enable business founders to devote more time to the things that matter most in their businesses.
With this integration, you'll be able to:
Learn more about the ShipMonk integration in the Gorgias App Store.

Annex Cloud is a cloud-based customer loyalty platform for enterprises. They provide integrated loyalty, engagement, and retention solutions across a range of program types like paid memberships, incentives, and more.
With this integration, you can:
Click here to learn more about our Annex Cloud integration.

Daton can replicate Gorgias data to your data warehouse in minutes, freeing up your analysts to focus on generating important business insights instead of extracting data.
With this integration, you can sync information from Gorgias to your data warehouse like:
To learn more about Daton, visit the listing in the Gorgias App Store.

Shogun is a headless ecommerce platform built for merchants. Convert more with richer merchandising and sub-second store speed. The Gorgias integration allows merchants to add chat capabilities to their Shogun-powered shops.
With Gorgias chat on your Shogun Frontend, you can:
Click here to learn more about our integration with Shogun Frontend.

Gobot helps fast-growing Shopify stores convert more shoppers and reduce support burden with beautiful guided selling quizzes and AI-powered support chatbots.
With this integration, you can:
Visit the Gobot listing in the Gorgias App Store to learn more.

Shop2app is a mobile app builder. It’s designed for local delivery, national delivery, and in-store pickup, and also makes it easy to manage subscriptions and send push notifications to customers.
With this integration, you can:
Visit the Shop2app listing in the Gorgias App Store to learn more.
The Gorgias App Store features 85+ high-quality integrations with other leading ecommerce tools. By connecting the apps that power your store, you can give your agents the context they need to provide remarkable customer service from a single workspace. (No more switching tabs!)
To add any of these apps to your helpdesk, go to Settings > Integrations or visit the Gorgias App Store.

Each month, our product team holds a casual, conversational event with our customers to demo new features, receive real-time feedback, and answer live Q&As.
Watch the video recap here, or read on for a recap of the latest releases.
With this new channel, you can receive and respond to SMS and MMS messages within Gorgias. This makes it easy for your customers to communicate with your store while they’re on the go, and easy for your agents to provide fast, conversational support.

We’re releasing SMS this quarter as a free trial for every customer on every plan. Conversations will count toward your plan’s ticket count, but there are no additional charges for minutes, usage, phone numbers, etc. In the coming months, we’ll be assessing the best way to provide Voice and SMS so we can continue to innovate and build powerful new features for these channels.
If you want customers to consent to receive SMS messages before your agents actually reply, you can do this with a simple Rule in Gorgias. Here’s what it would look like:

Read this article for four more Gorgias Rules to help automate SMS.

This is especially great for anyone who gets tickets assigned to them, but may not be looking at Gorgias throughout their entire workday. (Think managers, social media collaborators, etc.)
To see these notifications, you may need to adjust your browser and/or computer settings. You can see an example for Chrome + Mac in our official Product Update.
Quick response flows add a critical component to self-service, creating more ways to engage shoppers who visit your store online. We designed quick response flows around a key insight: 60% of the time, customers use chat to ask pre-purchase questions. The most successful merchants leverage their FAQ content to prompt conversations with quick response flows that generate revenue, trust, and loyalty.
If you haven’t yet activated quick response flows, you’re in for a treat. With this revamp, you can now easily adjust every step of the quick response flow experience from your self-service settings. Right under the Quick Response Flows tab, you can write in any question and answer you prefer and hit save — there’s no other place or screen you need to navigate to. Using the preview on the right, you can confirm the experience is exactly what you want to create for your customers.

If customers click on a quick response flow and find the information they need, this will not count towards your monthly ticket volume.
If they click on a quick response flow and select the “No, I need more help” option, it will create a ticket for an agent to address.
It’s amazing when our merchants start using a feature and take it to the next level. Some of the best practices we’ve seen include creating a unique tag for each quick response flow (e.g. Quick_Response_Flow_1), then adding a corresponding View in Tickets. This way, you can closely track the conversations prompted by quick response flows and dedicate a select group of trained agents to expand on the subject and help your customers become fans. For more on this subject, check out the Quick Response Flows help doc.
Tune into that timestamp if you want the full 25 minutes of customer-led questions and answers from our product team. Here were a few of the highlights!
Gorgias phone is an easy way to add a basic phone line to your store. If you’re looking for advanced, full call center features, our partners like Aircall or RingCentral may be a better solution for you.
For example, their phone-specific statistics are more in-depth than ours, but the ability to create a phone number and answer it in the Gorgias helpdesk is naturally easier with Gorgias.
Our long-term vision for Gorgias Phone is not to fully compete with apps like Aircall, but rather to invest in ecommerce-specific solutions so you can provide the best voice support to your shoppers.
It’s our next new channel, coming Q3! We have access to the API and are ready to start building at the end of the quarter. (Just need to polish up a few existing channel bugs first.)
Not yet, but we’d love to hear more feedback about this if it’s something you’re interested in! Submit this idea on our Product Roadmap to help us prioritize it.
That completes our recap of our May customer product event. We hold these events once a month to review the latest releases and connect with our customers in real-time. It’s a favorite of both customers and the Gorgias team.
If you’d like to sign up for the next one to attend live, you can register here. We’d love to have you join us!

Wondering if your team should add voice support to your ecommerce channels this year? You’re not alone.
Over 15% of our customers currently have a phone integration added to their account, thanks to the Gorgias Voice integration and partners like Aircall and RingCentral.
While voice support may feel like an “outdated” channel in the age of live chat and social media, this tells us that ecommerce support teams are increasingly finding value in offering it to their clients.
Here are 4 benefits of adding voice support to your ecommerce store:
Phones are an immediate communication channel, so it’s not surprising that adding voice support can boost your first response time. What we weren’t expecting, however, was by how much:
Our customers with phones have a first response time that’s 7x faster than merchants that don’t offer voice support. (30 minutes compared to 4 hours.)
What’s even more important to note, however, is that adding voice support doesn’t decrease resolution time (like many support managers fear). In fact, it makes quite a positive impact:
Our merchants using phones have an average resolution time that’s 34% faster than customers who don’t.
So not only does this channel help you respond to customers faster, but it helps you resolve their issues faster. That means your team can work more efficiently and spend roughly a third less time resolving each ticket. (Imagine how that could help increase your store’s revenue!)
Talking (literally) to shoppers and hearing their tone of voice is the best way your agents can adjust their responses to create a great customer experience.
While you can do your best to read clues in email and chat, it’s always going to be easier to match the customer’s tone when actually listening to them on the phone.
And when your agents can express empathy and solve the problem accordingly, you’ve got a better chance at getting that 5-star review and positive customer feedback.
Our customers using phones have an average Satisfaction score of 4.56 out of 5.
While that score also depends a lot on your support agents and their personal approach to customer service, there’s no denying that actually speaking to clients is helpful for both parties in those moments.
Especially if you sell high-end products or have VIP customers (like wholesalers buying in bulk), having a phone number adds a level of legitimacy to your business.
Since most online stores don’t immediately add phones as a support channel, it will stand out to customers when your shop does offer voice support.
Phones add a sense of maturity to your business, and (especially if you’re using an integrated solution like Gorgias Voice) there’s not much cost involved in elevating the status of your store like this.
While the internet has come a long way over the years in terms of accessibility, the truth remains that phone support may be an easier and more comfortable contact method for some of your customers than digital channels.
Test your live chat experience with a screen reader, for example. What’s the experience like? (And how does it compare to dialing a phone number and talking verbally to someone?)
If there’s a chance that voice support is more approachable for a part of your customer demographic, you’ll create a better shopping experience for them by adding a phone line.
The first thing you’ll need to decide is who on your team will actually be answering the phones.
A few options to explore:
Next, you’ll need to choose a phone platform.
If you’re adding our built-in voice channel to your Gorgias helpdesk, all you have to do to get started is log into your Gorgias helpdesk and create a new number (or forward or port an existing one, if you happen to have one already).

Our phone integration is included in all Gorgias plans, and unlike other providers, there’s no annual contract fee and no minimum seat requirement.
This makes it a great option for teams looking to add phones for the first time or who want to manage all communication channels in one place.
Plus, our ecommerce integrations save your agents time by displaying callers’ shopping history right in the helpdesk, so they don’t have to go searching for the last order, for example.

For more tips on how to create efficient phone processes and improve resolution time by 34%, check out this article.
Finally, once you’ve set up your team and chosen your provider, all that’s left to do is make your number visible.
If you’re offering voice support for all your customers, you might place it in the footer of your website or all transactional emails.
If you’re piloting voice support or using it exclusively for a segment of shoppers, you might save it for smaller email segments or place it only on dedicated landing pages just for them.
Wherever you decide to put your number, just make sure it's easily accessible and clearly visible so your shoppers can start calling, and your support team can start delivering even better customer experiences!

SMS is a convenient way for customers to contact your brand and receive fast support. It’s no wonder it’s one of the top five channels that consumers expect to engage with brands, alongside email, voice, website, and in-person.
Every Gorgias plan now includes two-way SMS at no additional cost, making it easy for your brand to start offering this conversational channel.
There are many reasons to offer customer service messaging, but here are the top four:
SMS is a conversational, real-time channel. The benefit of this is that customers tend to keep the conversation short and reply quickly to follow-up questions, meaning your agents can resolve the situation quickly, too.
Most people keep their phone with them everywhere they go. With SMS, it’s easy for customers to start the conversation and follow up as they move throughout their day, instead of feeling tied to a chat conversation on their laptop.
Sending a text message to your brand feels like texting a friend. Younger clientele will find this support channel natural, and it can even help you build that friendly feeling into your brand perception.
Does your refund or return policy require photo evidence to kick off the process? If your customers ever need to send pictures of damaged items or wrong products, SMS is the perfect channel because they’re probably taking those photos on their phone anyway.
Still not sure if SMS is a support channel your brand should prioritize? Try it for 2 weeks. Because SMS is included in every Gorgias plan, it’s easy to turn off if you decide it isn’t right.
Recommended reading: Our list of 60+ fascinating customer service statistics.
You’ll need two things to get started with Gorgias SMS. (Don’t worry, they’re both quick!)
If you’re new here, get started on the Gorgias helpdesk. It only takes a few minutes to create an account, and you can always book a call with our sales team if you have questions.

The second is a Gorgias-owned phone number, meaning you either created it in Gorgias or ported it from your previous phone provider. You can do both of these actions in Settings > Phone Numbers.
Note: SMS is currently only available for US, UK, and Canadian numbers.
Once your phone number is ready in Gorgias, you can add the SMS integration to it. You can do this from Settings > Integrations > SMS.
Once the integration is active, you’re ready to start replying to SMS conversations from your customers.

To tell your customers they can now text your brand, we recommend adding “Text us,” plus your phone number, in some or all of these places:
Below are four top automation rules to take full advantage of SMS customer service. We also have a full guide on customer service messaging that includes templates and macros to upgrade your SMS support.
SMS is an official channel in Gorgias, meaning you can see SMS-specific stats or create SMS-specific Views out of the box. There may be times when you also want to Tag tickets with “SMS” however, in which case you can do so with a Rule like this:

SMS is a fast, conversational channel, so you’ll want to assign these tickets to agents that can keep up with the pace. If you have a dedicated chat team, they’ll be naturals at answering questions via SMS, as well. Here’s a Rule that will automatically assign SMS tickets to a specific team.

When customers text your brand, they’ll expect a fast response. In order to buy your agents some time, we recommend sending an auto-response to let the customer know their message has been received and an agent will be with them shortly. This will also give them confidence that the text message did in fact go through, so they don’t follow up right away.

Whenever you add a new communication channel for your customers, you should consider how you’ll respond to WISMO (“Where is my order?”) questions on it. With SMS, you’ll want to keep the length of your reply in mind so you’re not sending an insanely long text message back to customers. We recommend creating a Rule that can A) make sure the reply follows the best format for SMS and B) save your agents from having to answer these WISMO questions manually.

Gorgias SMS empowers your brand to keep the conversation going on SMS, even when your customers are on the go.
We also integrate with SMS marketing apps, making it easier for agents to answer promotion replies from one workspace. They can work more efficiently while turning SMS questions into opportunities for better customer value.
In the Gorgias App Store, you’ll find some of the top ecommerce integration partners like Klaviyo, Attentive, Postscript, and more.
If your brand is using any of these apps to drive sales via SMS, we highly recommend integrating with Gorgias so your team can work more efficiently toward your revenue goals. When SMS marketing and SMS customer service work in tandem, they are far more powerful.
Want to see an example of a brand that successfully launched SMS customer support and effectively drove customers to use the new channel? Check out our playbook of Berkey Filters, an ecommerce merchant that did just that.
Ready to get started with this conversational support channel? Add SMS to your Gorgias helpdesk today or book a call with our team to learn more.

As we all locked down in March 2020 and changed our shopping habits, many brick-and-mortar retailers started their first online storefronts.
Gorgias has benefitted from the resulting ecommerce growth over the past two years, and we have grown the team to accommodate these trends. From 30 employees at the start of 2020, we are now more than 200 on our journey to delivering better customer service.
Our engineering team contributed to much of this hiring, which created some challenges and growing pains. What worked at the beginning with our team of three did not hold up when the team grew to 20 people. And the systems that scaled the team to 20 needed updates to support a team of 50. To continue to grow, we needed to build something more sustainable.
Continuous deployment — and the changes required to support it — presented a major opportunity for reaching toward the scale we aspired to. In this article I’ll explore how we automated and streamlined our process to make our developers’ lives easier and empower faster iteration.
Throughout the last two years of accelerated growth, we’ve identified a few things that we could do to better support our team expansion.
Before optimizing the feature release process, here’s how things went for our earlier, smaller team when deploying new additions:
This wasn’t perfect, but it was an effective solution for a small team. However, the accelerated growth in the engineering team led to a sharp increase in the number of projects and also collaborators on each project. We began to notice several points of friction:
It was clear that things needed to change.
On the Site Reliability Engineering (SRE) team, we are fans of the GitOps approach, where Git is the single source of truth. So when the previously mentioned points of friction became more critical, we felt that all the tooling involved in GitOps practices could help us find practical solutions.
Additionally, these solutions would often rely on tooling we already had in place (like Kubernetes or Helm, for example).
GitOps is an operational framework. It takes application-development best practices and applies them to infrastructure automation.
The main takeaway is that in a GitOps setting, everything from code to infrastructure configuration is versioned in Git. It is then possible to create automation by leveraging the workflows associated with Git.
One such class of that automation could be “operations by pull requests”. In that case, pull requests and associated events could trigger various operations.
Here are some examples:
ArgoCD is a continuous deployment tool that relies on GitOps practices. It helps synchronize live environments and services to version-controlled declarative service definitions and configurations, which ArgoCD calls Applications.
In simpler terms, an Application resource tells ArgoCD to look at a Git repository and to make sure the deployed service’s configuration matches the one stored in Git.
The goal wasn’t to reinvent the wheel when implementing continuous deployment. We instead wanted to approach it in a progressive manner. This would help build developer buy-in, lay the groundwork for a smoother transition, and reduce the risk of breaking deploys. ArgoCD was an excellent step toward those goals, given how flexible it is with customizable Config Management Plugins (CMP).
ArgoCD can track a branch to keep everything up to date with the last commit, but can also make sure a particular revision is used. We decided to use the latter approach as an intermediate step, because we weren’t quite ready to deploy off the HEAD of our repositories.
The only difference from a pipeline perspective is that it now updates the tracked revision in ArgoCD instead of running our complex deployment scripts. ArgoCD has a Command Line Interface (CLI) that allows us to simply do that. Our deployment jobs only need to run the following command:
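A sketch of what that looks like with the ArgoCD CLI (the application name here is illustrative, and `$CI_COMMIT_SHA` assumes a GitLab CI context; your invocation will depend on your setup):

```shell
# Point the ArgoCD Application at the commit the pipeline just built
argocd app set my-service --revision "$CI_COMMIT_SHA"

# If auto-sync isn't enabled, trigger the sync explicitly
argocd app sync my-service
```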
The developers’ workflow is left untouched at this point. Now comes the fun part.
Our biggest requirement for continuous deployment was to have some sort of safeguard in case things went wrong. No matter how much we trust our tests, it is always possible that a bug makes its way to our production environments.
Before implementing Argo Rollouts, we still kept an eye on the system to make sure everything was fine during deployment and took quick action when issues were discovered. But up to that point, this process was carried out manually.
It was time to automate that process, toward the goal of raising our team’s confidence levels when deploying new changes. By providing a safety net, of sorts, we could be sure that things would go according to plan without manually checking it all.
Argo Rollouts is a progressive delivery controller. It relies on a Kubernetes controller and set of custom resource definitions (CRD) to provide us with advanced deployment capabilities on top of the ones natively offered by Kubernetes. These include features like:

We were especially interested in the canary and canary analysis features. By shifting only a small portion of traffic to the new version of an application, we can limit the blast radius in case anything is wrong. Performing an analysis allows us to automatically, and periodically, check that our service’s new version is behaving as expected before promoting this canary.
Argo Rollouts is compatible with multiple metric providers including Datadog, which is the tool we use. This allows us to run a Datadog query (or multiple) every few minutes and compare the results with a threshold value we specify.
We can then configure Argo Rollouts to automatically take action, should the threshold(s) be exceeded too often during the analysis. In those cases, Argo Rollouts scales down the canary and scales the previous stable version of our software back to its initial number of replicas.

Each service has its own metrics to monitor, but for starters we added an error rate check for all of our services.
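A minimal AnalysisTemplate along those lines might look like the following sketch (the metric name, Datadog query, and threshold are illustrative, not our production values):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: error-rate-check
spec:
  metrics:
    - name: error-rate
      interval: 5m        # re-run the query every 5 minutes
      failureLimit: 3     # abort the rollout after 3 failed measurements
      # the canary is healthy while the error rate stays under 5%
      successCondition: default(result, 0) < 0.05
      provider:
        datadog:
          interval: 5m
          query: |
            sum:my_service.errors{env:production}.as_rate() /
            sum:my_service.requests{env:production}.as_rate()
```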
Remember when I mentioned replacing complex, project-specific deployment scripts with a single, simple command? That’s not entirely accurate, and requires some additional nuance for a full understanding.
Not only did we need to deploy software on different kinds of environments (staging and production), but also in multiple Kubernetes clusters per environment. For example, the applications composing the Gorgias core platform are deployed across multiple cloud regions all around the world.
While ArgoCD and Argo Rollouts might seem like magic tools, we still need some “glue” to make everything stick together. Thanks to ArgoCD’s application-based mechanisms, though, we were able to replace our custom per-project scripts with a single common tool used across all projects. We named this in-house tool the deployment conductor.
We even went a step further and implemented this tool in a way that accepts simple YAML configuration files. Such files allow us to declare various environments and clusters in which we want each individual project to be deployed.
When deploying a service to an environment, our tool will then go through all clusters listed for that environment.
For each of these, it will look for dedicated values.yaml files in the service’s chart directory. This lets developers change a service’s configuration based on the environment and cluster in which it’s deployed. Typically, they adjust the number of replicas for each service depending on the geographical region.
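To make this concrete, here is a sketch of what such a configuration could look like (the file layout, cluster names, and keys are purely illustrative; our actual format differs):

```yaml
# deployment config consumed by the conductor (illustrative)
environments:
  staging:
    clusters:
      - staging-europe-west1
  production:
    clusters:
      - prod-europe-west1
      - prod-us-east1
      - prod-asia-northeast1

# chart directory layout, with per-cluster overrides:
#   chart/
#     values.yaml                  # shared defaults
#     values.prod-us-east1.yaml    # e.g. replicaCount tuned per region
```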
This makes it much easier for developers than having to manage configuration and maintain deployment scripts.
This leads us to the end of our journey’s first leg: our first encounter with continuous deployment.
After we migrated all our Kubernetes Deployments to Argo Rollouts, we let our developers get acclimated for the next few weeks.
Our new setup still wasn’t fully optimized, but we felt like it was a big improvement compared to the previous one. And while we could think of many improvements to make things even more reliable before enabling continuous deployment, we decided to get feedback from the team during this period, to iterate more effectively.
Some projects introduced additional technicalities to overcome, but we easily identified a small first batch of projects where we could enable CD. Before deployment, we asked the development team if we were missing anything they needed to be comfortable with automatic deployment of their code in production environments.
With everyone feeling good about where we were at, we removed the manual step in our CI system (GitLab) for jobs deploying to production environments.
We’re still monitoring this closely, but so far we haven’t had any issues. We still plan on enabling continuous deployment on all our projects in the near future, but it will be a work in progress for now.
Here are some ideas for future improvements that anticipate potential roadblocks:
We’re excited to explore these challenges. And, overall, our developers have welcomed these changes with open arms. It helps that our systems have been successful at stopping bad deployments from creating big incidents so far.
While we haven’t reached the end of our journey yet, we are confident that we are on the right path, moving at the right pace for our team.

As you work with SQLAlchemy, over time, you might have a performance nightmare brewing in the background that you aren’t even aware of.
In this lesser-known issue, which strikes primarily in larger projects, normal usage leads to an ever-growing number of idle-in-transaction database connections. These open connections can kill the overall performance of the application.
While you can fix this issue down the line, when it begins to take a toll on your performance, it takes much less work to mitigate the problem from the start.
At Gorgias, we learned this lesson the hard way. After testing different approaches, we solved the problem by extending the high-level SQLAlchemy classes (namely sessions and transactions) with functionality that allows working with "live" DB (database) objects for limited periods of time, expunging them after they are no longer needed.
This analysis covers everything you need to know to close those unnecessary open DB connections and keep your application humming along.
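As a rough illustration of the pattern (not our production code: the model, the in-memory SQLite engine, and the helper name are all invented for this sketch), a context manager can scope a session to a short block, then expunge and close so nothing holds a connection afterwards:

```python
from contextlib import contextmanager

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

# In-memory SQLite keeps the sketch self-contained
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# expire_on_commit=False keeps loaded attributes readable after detachment
Session = sessionmaker(bind=engine, expire_on_commit=False)

@contextmanager
def short_lived_session():
    """Open a session only for the duration of the block, then expunge
    every loaded object and close the session, so no idle-in-transaction
    connection lingers afterwards."""
    session = Session()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.expunge_all()  # detach objects so they outlive the session
        session.close()        # release the connection back to the pool

with short_lived_session() as s:
    s.add(User(name="ada"))

with short_lived_session() as s:
    user = s.query(User).filter_by(name="ada").one()

# `user` is now detached, but its already-loaded attributes stay readable
print(user.name)
```

The key point is the `finally` block: whether the work succeeds or fails, the connection goes back to the pool immediately instead of sitting open for the lifetime of a long-running task.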
Leading Python web frameworks such as Django come with an integrated ORM (object-relational mapping) that handles all database access, separating most of the low-level database concerns from the actual user code. The developer can write their code focusing on the actual logic around models, rather than thinking of the DB engine, transaction management or isolation level.
While this scenario seems enticing, big frameworks like Django may not always be suitable for our projects. What happens if we want to build our own starting from a microframework (instead of a full-stack framework) and augment it only with the components that we need?
In Python, the extra packages we would use to build ourselves a full-fledged framework are fairly standard: They will most likely include Jinja2 for template rendering, Marshmallow for dealing with schemas and SQLAlchemy as ORM.
Not all projects are web applications (following a request-response pattern), and even among web applications, most also run background tasks that have nothing to do with requests or responses.
This is important to understand because in request-response paradigms, we usually open a DB transaction upon receiving a request and we close it when responding to it. This allows us to associate the number of concurrent DB transactions with the number of parallel HTTP requests handled. A transaction stays open for as long as a request is being processed, and that must happen relatively quickly — users don't appreciate long loading times.
Transactions opened and closed by background tasks are a totally different story: There's no clear and simple rule on how DB transactions are managed at a code level, there's no easy way to tell how long tasks (should) last, and there usually isn't any upper limit to the execution time.
This could lead to potentially long transaction times, during which the process effectively holds a DB connection open without actually using it for the majority of the time period. This state is known as an idle-in-transaction connection state and should be avoided as much as possible, because it blocks DB resources without actively using them.
To fully understand how database access happens in a SQLAlchemy-based app, one needs to understand the layers responsible for its execution.

At the highest level, we code our DB interaction using high-level SQLAlchemy queries on our defined models. The query is then transformed into one or more SQL statements by SQLAlchemy's ORM, which are passed on to a database engine (driver) through a common Python DB API defined by PEP-249. (PEP-249 is a Python Enhancement Proposal dedicated to standardizing Python DB server access.) The database engine communicates with the actual database server.
At first glance, everything looks good in this stack. However, there's one tiny problem: The DB API (defined by PEP-249) does not provide an explicit way of managing transactions. In fact, it mandates the use of a default transaction regardless of the operations you're executing, so even the simplest SELECT will open a transaction if none is open on the current connection.
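The effect is easy to observe through SQLAlchemy's session layer. A minimal sketch (SQLAlchemy 1.4+, with an in-memory SQLite database standing in for any PEP-249-backed server purely to keep it runnable):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import Session

# In-memory SQLite stands in for any PEP-249-backed database here.
engine = create_engine("sqlite://")

with Session(engine) as session:
    session.execute(text("SELECT 1"))      # a read-only query...
    opened = session.in_transaction()      # ...yet a transaction is now open
    session.commit()                       # close it explicitly
    still_open = session.in_transaction()  # back to no open transaction
```

Nothing in the query asked for a transaction; the layers below opened one anyway, which is exactly the behavior described above.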
SQLAlchemy builds on top of PEP-249, doing its best to stay out of driver implementation details. That way, any Python DB driver claiming PEP-249 compatibility could work well with it.
While this is generally a good idea, SQLAlchemy has no choice but to inherit the limitations and design choices made at the PEP-249 level. More precisely (and importantly), it will automatically open a transaction for you upon the very first query, regardless of whether it's needed. And that's the root of the issue we set out to solve: In production, you'll probably end up with a lot of unwanted transactions, locking up DB resources for longer than desired.
Also, SQLAlchemy uses sessions (in-memory caches of models) that rely on transactions. And the whole SQLAlchemy world is built around sessions. While you could technically ditch them to avoid the idle-in-transaction problem with a "lower-level" interface to the DB, all of the examples and documentation you'll find online use the "higher-level" interface (i.e., sessions). You'll likely feel like you're swimming against the tide trying to get that workaround up and running.
Some DB servers, most notably Postgres, default to an autocommit mode. This mode implies atomicity at the SQL statement level, which is what developers are likely to expect: they explicitly open a transaction block when they need one and operate outside of one by default.
If you're reading this, you have probably already Googled "sqlalchemy autocommit" and may have found SQLAlchemy's official documentation on the (now deprecated) autocommit mode. Unfortunately, this functionality is a "soft" autocommit implemented purely in SQLAlchemy, on top of the PEP-249 driver; it doesn't have anything to do with the DB's native autocommit mode.
This version works by simply committing the opened transaction as soon as SQLAlchemy detects an SQL statement that modifies data. Unfortunately, that doesn't fix our problem; the pointless, underlying DB transaction opened by non-modifying queries still remains open.
When using Postgres, we could in theory play with the AUTOCOMMIT isolation level option introduced in psycopg2 to make use of the DB-level autocommit mode. However, this is far from ideal, as it would require hooking into SQLAlchemy's transaction management and adjusting the isolation level each time as needed. Additionally, "autocommit" isn't really an isolation level, and it's not desirable to change a connection's isolation level all the time from various parts of the code. You can find more details on this matter, along with a possible implementation of this idea, in Carl Meyer's article "PostgreSQL Transactions and SQLAlchemy."
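For reference, SQLAlchemy does expose driver-level autocommit as an engine-wide setting via `create_engine`'s `isolation_level` argument. A minimal sketch (the SQLite URL is only there to keep the snippet runnable; for Postgres you would point `create_engine` at a `postgresql+psycopg2://` URL):

```python
from sqlalchemy import create_engine, text

# Request real driver-level autocommit on every pooled connection.
engine = create_engine("sqlite://", isolation_level="AUTOCOMMIT")

with engine.connect() as conn:
    # Each statement commits on its own; no transaction lingers afterward.
    value = conn.execute(text("SELECT 1")).scalar()
```

Note that this flips the behavior for the whole engine, which is exactly the kind of coarse, implicit switch the rest of this article tries to avoid.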
At Gorgias, we always prefer explicit solutions to implicit assumptions. By including all details, even common ones that most developers would assume by default, we can be more clear and leave less guesswork later on. This is why we didn't want to hack together a solution behind the scenes, just to get rid of our idle-in-transactions problem. We decided to dig deeper and come up with a proper, explicit, and (almost) hack-free method to fix it.
The following chart shows the profile of an idle-in-transaction case over a period of two weeks, before and after fixing the problem.

As you can see, we’re talking about tens of seconds during which connections are being held in an unusable state. In the context of a user waiting for a page to load, that is an excruciatingly long period of time.
SQLAlchemy works with sessions that are, simply put, in-memory caches of model instances. The code behind these sessions is quite complex, but usage boils down to either an explicit session reference or implicit usage.
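A minimal sketch of the two styles, using an illustrative model and names (the `scoped_session` query-property pattern is one common way the implicit style shows up):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker, scoped_session

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
SessionLocal = sessionmaker(bind=engine)

# Explicit session reference: you create and pass around the session yourself.
session = SessionLocal()
session.add(User(name="alice"))
session.commit()
users = session.query(User).all()
session.close()

# Implicit usage: a scoped session wired into the models as a query property.
db_session = scoped_session(SessionLocal)
Base.query = db_session.query_property()
same_users = User.query.all()   # no session in sight, but one is used
db_session.remove()
```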
Both of these approaches will ensure a transaction is opened and will not close it until a later session.commit() or session.rollback(). There's actually nothing wrong with calling session.commit() when you need to explicitly close a transaction that you know is open and you're done using the DB in that particular scope.
To address the idle-in-transaction problem generated by such a line, we must keep the code between the query and the commit relatively short and fast (i.e. avoid blocking calls or CPU-intensive operations).
It sounds simple enough, but what happens if we access an attribute of a DB model after session.commit()? It will open another transaction and leave it hanging, even though it might not need to hit the DB at all.
While we can't foresee what a developer will do with the DB object afterward, we can prevent usage that would hit the DB (and open a new transaction) by expunging it from the session. An expunged object will raise an exception if any unloaded (or expired) attributes are accessed. And that’s what we actually want here: to make it crash if misused, rather than leaving idle-in-transaction connections behind to block DB resources.
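Both behaviors are easy to reproduce. A sketch (SQLAlchemy 1.4+, in-memory SQLite, illustrative model):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session
from sqlalchemy.orm.exc import DetachedInstanceError

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as setup:
    setup.add(User(id=1, name="alice"))
    setup.commit()

# 1) Attribute access after commit silently reopens a transaction.
with Session(engine) as session:
    user = session.get(User, 1)
    session.commit()                      # attributes are now expired
    _ = user.name                         # lazy refresh -> a brand-new transaction
    reopened = session.in_transaction()   # idle-in-transaction in the making
    session.rollback()

# 2) Expunging makes the same mistake crash loudly instead.
crashed = False
with Session(engine) as session:
    user = session.get(User, 1)
    session.commit()
    session.expunge(user)                 # detach the instance from the session
    try:
        _ = user.name                     # would need the DB to refresh
    except DetachedInstanceError:
        crashed = True
```

The second variant trades a silent resource leak for an immediate, debuggable exception, which is precisely the point.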
When working with multiple objects and complex queries, it’s easy to overlook the necessary expunging of those objects. It only takes one un-expunged object to trigger the idle-in-transaction problem, so you need to be consistent.
Objects can't be used for any kind of DB interaction after being expunged. So how do we make it clear and obvious that certain objects are to be used within a limited scope? The answer is a Python context manager to handle SQLAlchemy transactions and connections. Not only does it allow us to visually limit object usage to a block, but it also ensures everything is prepared for us and cleaned up afterwards.
The construct we use looks like "with session.begin(expunge=True) as tx:". It normally opens a transaction block associated with a new SQLAlchemy session, but we've added a new expunge keyword to the begin method, instructing SQLAlchemy to automatically expunge objects associated with the block's session (tx.session). To get this kind of behavior from a session, we need to override the begin method (and friends) in a subclass of SQLAlchemy's Session.
We want to keep the default behavior, using a new ExpungingTransaction instead of SQLAlchemy's SessionTransaction only when explicitly instructed to by the expunge=True argument.
You can use the class_ argument of sessionmaker to instruct it to build an ExpungingSession instead of a regular Session.
The last piece of the puzzle is the ExpungingTransaction code, which is responsible for two important things: committing the session so the underlying transaction gets closed and expunging objects so that we don't accidentally reopen the transaction.
By following these steps, you get a useful context manager that forces you to group your DB interaction into a block and notifies you if you mistakenly use (unloaded) objects outside of it.
What if we really need to access DB models outside of an expunging context?
Simply passing models to functions as arguments helps achieve an important goal: decoupling model retrieval from actual usage. However, such functions are no longer in control of what happens to those models afterwards.
We don't want to forbid all usage of models outside of this context, but we need to somehow inform the user that the model object comes “as is,” with whatever loaded attributes it has. It's disconnected from the DB and shouldn't be modified.
In SQLAlchemy, when we modify a live model object, we expect the change to be pushed to the DB as soon as commit or flush is called on the owning session. With expunged objects, this is not the case, because they don't belong to a session. So how does the user of such an object know what to expect from a given model object?
To safely and explicitly pass along this kind of model object, we introduced frozen objects. Frozen objects are basically proxies to expunged models that won't allow any modification.
To work with these frozen objects, we added a freeze method to our ExpungingSession:
So now our code would look something like this:
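Our actual freeze method lives on the session and proxies expunged SQLAlchemy models; stripped down to plain Python, the idea behind a frozen proxy is roughly this (class and attribute names are illustrative, not our real implementation):

```python
class Frozen:
    """Read-only snapshot of an object's already-loaded attributes."""
    def __init__(self, obj, attrs):
        # Snapshot only the attributes that were loaded before freezing.
        object.__setattr__(self, "_data", {a: getattr(obj, a) for a in attrs})
    def __getattr__(self, name):
        try:
            return object.__getattribute__(self, "_data")[name]
        except KeyError:
            raise AttributeError(f"{name!r} was not loaded before freezing")
    def __setattr__(self, name, value):
        raise AttributeError("frozen objects cannot be modified")

class User:                         # stand-in for a SQLAlchemy model
    def __init__(self, id, name):
        self.id, self.name = id, name

frozen = Frozen(User(1, "alice"), ["id", "name"])
name = frozen.name                  # reads come from the snapshot
try:
    frozen.name = "bob"             # any write is rejected
    write_blocked = False
except AttributeError:
    write_blocked = True
```

Reads of loaded attributes work, writes fail loudly, and attributes that were never loaded raise instead of silently hitting the DB.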
Now, what if we want to modify the object outside of this context, later on (e.g., after a long-lasting HTTP request)? As our frozen object is completely disconnected from any session (and from the DB), we need to fetch a warm instance associated with it from the DB and make our changes to that instance. This is done by adding a helper fetch_warm_instance method to our session...
...and then our code that modifies the object would say something like this.
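In spirit, fetching a warm instance is just a primary-key lookup followed by a normal mutate-and-commit cycle. A runnable sketch, assuming a single-column id primary key and using a namedtuple as a stand-in for a frozen object (in our real code this is a method on the custom session):

```python
from collections import namedtuple
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add(User(id=1, name="alice"))
    s.commit()

# Stand-in for a frozen object: a detached snapshot carrying its primary key.
FrozenUser = namedtuple("FrozenUser", ["id", "name"])
frozen = FrozenUser(id=1, name="alice")

def fetch_warm_instance(session, model_cls, frozen_obj):
    # Re-attach point: load a live instance matching the frozen snapshot.
    return session.get(model_cls, frozen_obj.id)

# Later, e.g. after a long-lived request:
with Session(engine) as session, session.begin():
    warm = fetch_warm_instance(session, User, frozen)
    warm.name = "bob"            # modify the warm copy
# The transaction block exits here and commits the change right away.

with Session(engine) as check:
    current = check.get(User, 1).name
```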
When the second context manager exits, it will call commit on tx.session, and changes to my_model will be committed to the DB right away.
We now have a way of safely dealing with models without generating idle-in-transaction problems, but the code quickly becomes a mess if we have to deal with relationships: We need to freeze them separately and pass them along as if they aren’t related. This could be overcome by telling the freeze method to freeze all related objects, recursively walking the relationships.
We'll have to make some adjustments to our frozen proxy class as well.
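Conceptually, the recursive walk looks like this. This is a plain-Python sketch with made-up names; the real version operates on SQLAlchemy relationships and only touches ones that are already loaded, so it never triggers a lazy load:

```python
def freeze_recursively(obj, spec):
    """spec maps attribute names to None (plain value) or a nested spec
    (a relationship). Returns a nested dict snapshot of loaded data."""
    snapshot = {}
    for attr, sub in spec.items():
        value = getattr(obj, attr)
        if sub is None:
            snapshot[attr] = value                     # plain attribute
        elif isinstance(value, (list, tuple)):
            snapshot[attr] = [freeze_recursively(v, sub) for v in value]
        elif value is not None:
            snapshot[attr] = freeze_recursively(value, sub)
    return snapshot

class Obj:                         # generic stand-in for a model instance
    def __init__(self, **kw):
        self.__dict__.update(kw)

ticket = Obj(id=1, messages=[Obj(id=10, body="hi")])
frozen = freeze_recursively(
    ticket, {"id": None, "messages": {"id": None, "body": None}}
)
```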
Now, we can fetch, freeze, and use frozen objects with any preloaded relationships.
While the code to access the DB with SQLAlchemy may look simple and straightforward, one should always pay close attention to transaction management and the subtleties that arise from the various layers of the persistence stack.
We learned this the hard way, when our services eventually started to exhaust the DB resources many years into development.
If you recently decided to use a software stack similar to ours, you should consider writing your DB access code in such a way that it avoids idle-in-transaction issues, even from the first days of your project. The problem may not be obvious at the beginning, but it becomes painfully apparent as you scale.
If your project is mature and has been in development for years, you should consider planning changes to your code to avoid or to minimize idle-in-transaction issues, while the situation is still under control. You can start writing new idle-in-transaction-proof code while planning to gradually update existing code, according to the capacity of your development team.

Like any major topic in your company, your compensation policy should reflect your organizational values.
At Gorgias, we created a compensation calculator that reflected ours, setting salaries across the organization based on 3 key principles: be data-driven, beat the market average, and be transparent.
Since the beginning, we applied the first two: Each of our employees was granted data-driven stock options that beat the market average.
However, we were challenged internally: Our team members asked how much they would make if they switched teams or if they got promoted.
This led to the implementation of our third key principle, as we shared the compensation calculator with everyone at Gorgias and beyond: See the calculator here.
This was not a small challenge. We’re sharing our process in hopes that we can help other companies arrive at equitable, transparent compensation practices.
First, let’s get back to how we built the tool. We had to decide which criteria we wanted to take into account. Based on research articles and benchmarks of what other companies have done, we decided that our compensation model would be based on 4 factors: position, level, location, and strategic orientation.
If we had to sum it all up briefly, our formula looks like this:
Salary = Average of benchmark data (for the position, at the defined percentile and level) × Location index
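Spelled out as code, the formula is nothing more than an average and a multiplication. The numbers below are made up for illustration and are not our actual benchmarks or indexes:

```python
def target_salary(benchmark_data_points, location_index):
    """Average the benchmark data for the position at the chosen
    percentile and level, then scale by the location index."""
    average = sum(benchmark_data_points) / len(benchmark_data_points)
    return average * location_index

# e.g. three fictional 90th-percentile data points, with a 0.65 index
example = target_salary([140_000, 150_000, 145_000], 0.65)
```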

This is the job title someone has in the company. It looks simple, but it can be challenging to define! Even if the titles don’t really vary from one company to another, people might have different duties, deal with much bigger clients or have more technical responsibilities. Sometimes your job title or position doesn’t match the existing databases.
For some of these roles, when we thought our team members were doing more than the market average, we cross-referenced several databases to get something closer to fairness.
To assess a level we defined specific criteria in our growth plan for each job position. It is, of course, linked to seniority, but that is not the primary factor. When we hire someone, we evaluate their skills using specific challenges and case studies during our interview processes.
Depending on the database, you’ll find levels like beginner, intermediate, and expert, which we represent as L1, L2, L3, and so on. We decided to go with six levels, from L1 to L6, for individual contributors and six levels in management, from team lead to C-level executive.
Our location index is based on the cost of living in a specific city (we rely on Numbeo, for instance) and on the average salary for the positions we hire for (we use Glassdoor). Some cities are better sources of specific talent. By combining the two, we get a more accurate location index.
When we are missing data for a specific city, we use the nearest one where we have data available.
Our reference is San Francisco, where the location index equals 1, meaning it’s basically the most expensive city in terms of hiring. For others, we have an index that can vary from 0.29 (Belgrade, Serbia) to 0.56 (Paris, France) to 0.65 (Toronto, Canada) etc. We now have 50+ locations in our salary calculator — a necessary consideration for our quickly growing, global team of full-time employees and contractors.

We rely on our strategic orientation to select which percentile we want to use in our databases. When we started Gorgias we were using the 50th percentile. As we grew (and raised funds), we wanted to be 100% sure that we were hiring the best people to build the best possible company.
High quality talent can be expensive (but not as expensive as making the wrong hires)! Obviously, we can’t pay everyone at the top of the market and align with big players like Google, but we can do our best to get close.
Since having the best product is our priority, we pay our engineering and product teams at the 90th percentile, meaning their pay is in the top 10% of the industry. We pay other teams at the 60th percentile.
Some other companies take additional criteria into account, such as company seniority. We believe seniority should be reflected in equity rather than in salary. If you apply a company-seniority index to salaries, eventually some of your team members will be out of step with the market. Those employees may stay in your company only because they won’t be able to find the same salary elsewhere.
Data is at the heart of our company DNA.
Where should you find your data? Data is everywhere! What matters most is the quality.
We look for the most relevant data on the market. If the database is not robust enough, we look elsewhere. So far we have managed to rely on several of them: Opencomp, Optionimpact, Figures.hr, and Pave are some major datasets we use for compensation. We’re curious and always looking for more. We’ll soon dig into Carta, Eon, and Levels. The more data we get, the more confident we are about the offers we make to our team.

Once we have the data, we apply our location index. It applies to both salaries and equity.
To build our equity package, we start from the cash compensation and then apply a “team” multiplier and a “level” multiplier. Those multipliers rely on data, of course: We’re using the same databases mentioned above, as well as the Rewarding Talent documentation for Europe.
As we mentioned above, once our tool was robust enough, we shared it internally.
To be honest, checking and checking again took longer than expected. But we all agreed that we’d rather release it to good reactions than rush it and create fear. We postponed the release for one month to check and double-check the results.
For the most effective release, we decided to do two things.
Overall, the reactions have been great. People loved the transparency and we got solid feedback.
We released the new calculator in September 2021, and overall we’re really happy with the response. We also had positive feedback from the update this month.
Let’s see how it goes with time.
Let’s be humble here: It’s only the beginning. It’s a Google Sheet. Of course, we’ll need to iterate on it.
In the meantime, you can check out the calculator here.
So far we’ve made plans to review the whole grid every year. However, now that it’s public within the teams, we can collect feedback and potentially make some changes. Everyone can add comments as they notice potential issues.
The next step for us is to share it online with everyone, on our website, so that candidates can have a vision of what we offer. We hope we’ll attract more talent thanks to this level of transparency and the value of our compensation packages.

I come from the world of physical retail where building a bond was more straightforward. We often celebrated wins with breakfast and champagne (yes, I’m French!) or by simply clapping our hands and making noise of joy.
We would also have lunch together every day, engaging in many informal discussions.
Of course, it bonded us! I knew my colleagues’ dogs’ names and their plumbing problems, and I felt really close to many of them.
Employee engagement is one of the primary drivers of productivity, work quality, and talent retention. When I joined Gorgias, where we have a globally distributed team, I wondered how you create the sense of belonging that drives that engagement.
Like many companies now, our workforce is distributed. But at Gorgias, it’s a truly global affair: Our team lives in 17 countries across four continents and many different time zones, which can be challenging.
And yet, I believe Gorgias culture is truly amazing and even better than the one I used to know.
I’ve realized that we achieved this by relying on the critical ingredients of a strong relationship.

By repeating these strong moments, you can make the connection between people stronger as well. The stronger the connection, the stronger the engagement.
Speaking of a strong engagement, Gorgias’s eNPS (employee Net Promoter Score) is 50. How is this possible? Well, what’s always quoted as one of our main strengths is the company culture, and how it connects our employees.
Let’s take it further by exploring five actionable steps we have taken to make that happen.
While some would push back against events like these falling under the purview of the People team, they are important for building strong culture, team cohesion, and employee happiness — all areas that are definitely part of our directive.
Here’s what you need to know to bring these summits to your organization.

As the name states, it’s a virtual event where the whole company connects.
It’s not mandatory, but it is highly recommended to attend because it’s fun and you learn many things.
It’s a mix of company updates, fun moments, and inspiring sessions. Each session is short, to give everyone the opportunity to breathe.
Typically, we have three kinds of sessions.
Due to timezones, some sessions don’t include every country.


Our last virtual summit cost us roughly $13,000, which works out to about $65 per head.
The first thing you might already have in mind is: It takes time! And you’re right.
The more we grow, the more challenging it becomes to organize these events.
I believe we’ll eventually need to have a dedicated event manager for all of our physical and virtual events. I want to have them within my team, and I 100% believe it’s worth it.
Another challenge can be technical difficulties with your event software choice, so make sure that you find a reliable platform that suits your needs.
Our team is a mix of hybrid and full-remote workers.
Since we don’t want the full-remote people to become disconnected, we highly encourage them to join the nearest hub once a quarter.
And when they do, we organize some happy hours, games or movie nights. Those face-to-face activities help create bonds between employees. It’s simple and doesn’t require a lot of organization, but it creates an incredible moment every time the remote teams join. We call them Gorgias Weeks.
We were fortunate to be able to organize our company offsite and gather a massive part of the crew together in October 2021.
The pandemic created doubt and additional points of stress, but looking back I’m so glad we were able to create an opportunity for everyone to meet in person.
We asked everyone to bring a health pass — full vaccination or PCR test — and we picked a location that allowed for a lot of outdoor activities.
We made sure the agenda for the two days was not too busy. As with our virtual summit, it was a balance of company alignment, learning, and fun. We made sure people had enough free time to relax, talk to each other, play games, or play sports.
This company offsite is an essential moment for us; it helps create strong bonds and great memories.

We encourage every team to organize their own offsite for team-building purposes. Since people don’t meet a lot physically, having these once a year is great!
We let each team lead own it: They pick the location and the agenda, and we provide guidelines along with the budget.
Needless to say, it helps build stronger bonds and great memories.
In my experience, it was quite tough to create those moments internally with the team. That’s why we decided to start our team meeting with a fun activity of 10-15 minutes, where we are able to share more than just work.
Every week, there is a different meeting owner who has to come up with new fun activities and games. Starting the meeting with this kind of ice-breaking activity brings powerful energy, and people are more engaged and effective in the sessions. I would recommend it to everyone, especially to those who think, “We already have so many things to review in those weekly meetings, we don’t have time for that.” Try it once, you’ll see how the energy and productivity are different afterward.
On top of that, I also believe tools that encourage colleagues to meet randomly are great. On our side, we use Donut. It sends a weekly reminder that encourages employees to make it to their meeting with a colleague.
Overall, we’ve organized six virtual summits, four company retreats, three Gorgias weeks, and hundreds of virtual coffee and fun meetings.
At the beginning there were only 30 people in the company — now there are 200 of them. As I mentioned, it’s becoming more and more challenging to organize these meetups, but it’s also the most exciting part: making sure the next summit is better than the previous one!
Of course, I’m aware that employee fulfillment and connection are not the only ingredients for retention. But they are key ingredients and shouldn’t be forgotten, especially as we all become more remote.
It’s a worthy investment to organize these events and allocate resources to them, because it makes everyone at Gorgias feel included and connected. And I have no doubt, now, that it’s part of our responsibilities in People Ops.


