As any game developer will tell you, taking a game concept all the way to a playable product that has been trialled and tested to the point where you can eventually launch it is a long and often painful process. It’s inevitable that you’ll hit challenges along the way, especially if the game you’re working on has multiplayer components.

Thankfully, a lot of these challenges can be easily avoided, or at the very least their disruption minimized, by knowing what to look out for at four key moments: the development cycle, the run-up to launch, the launch period and, finally, post-launch.

Mathieu Duperré, CEO at Edgegap, breaks down the specific challenges relating to these key moments and how best to plan for them.


Main development cycle

There are plenty of challenges you can encounter during the main development cycle, but some of the most common stem from QA teams not getting regular access to a game build, poorly built and rushed CI/CD pipelines, trying to build too many services in-house (netcode, game engine, backend services and so on) rather than getting external support, cutting corners with open-source projects to save money, and waiting until post-launch to look at important features such as crossplay.

If any of those sound familiar, it’s time to re-evaluate your DevOps methodology. Are you following it to the letter, or going off course at some point? If a lack of resources is what’s pulling you off a defined DevOps path, consider automating some of your processes to make your team and development cycle more efficient. Just make sure that any tools you use to facilitate this – especially open-source projects – have 24/7 support and aren’t likely to be abandoned.
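If you do go down the automation route, the first win is usually a scripted build-and-smoke-test step that runs on every merge, so QA always has a fresh build to pull. Below is a minimal sketch in Python; the build command, test script and artifact paths are placeholders you would swap for your own engine and pipeline.

# build_and_publish.py - minimal CI step: build, smoke-test, publish for QA.
# All commands and paths are placeholders, not tied to any specific engine.
import shutil
import subprocess
import sys
from datetime import datetime, timezone
from pathlib import Path

BUILD_CMD = ["./build_game.sh", "--target", "dedicated-server"]  # placeholder build command
SMOKE_TEST_CMD = ["python", "smoke_tests.py", "--headless"]      # placeholder smoke tests
ARTIFACT = Path("out/game_server.tar.gz")                        # placeholder build output
QA_DROP = Path("/shared/qa-builds")                              # placeholder QA share

def run(cmd):
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # fail the whole step if any command fails

def main():
    run(BUILD_CMD)
    run(SMOKE_TEST_CMD)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    QA_DROP.mkdir(parents=True, exist_ok=True)
    dest = QA_DROP / f"game_server-{stamp}.tar.gz"
    shutil.copy2(ARTIFACT, dest)  # publish so QA always has the latest build
    print("published:", dest)
    return 0

if __name__ == "__main__":
    sys.exit(main())

Hooked into your CI system, a script like this is what keeps QA from waiting days between builds.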

Common mistakes you can avoid right before launch

Selecting only a handful of public cloud regions for your multiplayer game while pursuing a worldwide launch is a latency-ridden recipe for disaster, especially if your game proves a hit at launch. You might also be surprised by the number of studios that completely forget to consider data protection regulations such as GDPR, CCPA and PIPEDA, which apply to any business that collects data from its customers (hint: if you’re a game studio, this means you!).
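To make the latency point concrete, here’s a rough illustration (region names and ping figures are hypothetical) of matching players to the nearest live region: with only one region deployed, distant players end up well above a playable round-trip time.

# region_check.py - illustrative only: pick each player's best available region
# from measured round-trip times and flag anyone left above a playable threshold.
PLAYABLE_RTT_MS = 80  # illustrative threshold; tune for your genre

# Hypothetical measured RTTs (ms) from three players to candidate regions.
player_pings = {
    "player_eu":   {"eu-west": 18,  "us-east": 95,  "ap-southeast": 240},
    "player_na":   {"eu-west": 90,  "us-east": 22,  "ap-southeast": 190},
    "player_apac": {"eu-west": 250, "us-east": 200, "ap-southeast": 25},
}

def assign(regions_live):
    print("regions live:", regions_live)
    for player, pings in player_pings.items():
        best = min(regions_live, key=lambda region: pings[region])
        rtt = pings[best]
        verdict = "OK" if rtt <= PLAYABLE_RTT_MS else "TOO HIGH"
        print(f"  {player}: {best} at {rtt} ms ({verdict})")

assign(["us-east"])                              # worldwide launch on a single region
assign(["eu-west", "us-east", "ap-southeast"])   # broader regional coverage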

But the most common, and often most dangerous, mistake multiplayer developers make at this stage? Underestimating their worldwide peak CCU (concurrent users) and allowing only a small buffer of around 10%. This is usually why so many players experience crashes on launch day. Similarly, just because you’ve got your own data centre doesn’t mean you won’t need external network support, especially if you’ve based your scalability assumptions on a test with 1,000 players…

So what can you do to avoid these issues? While it’s impossible to predict with pinpoint precision how many players you’ll have at launch, or where they’ll be, it’s important to be flexible. That means overpredicting, and having a plan for rapid scaling in case your game is an overnight success.
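As a back-of-the-envelope illustration (every figure here is hypothetical), the gap between a 10% buffer and genuine overprediction becomes obvious once you turn a CCU estimate into a server count:

# capacity_sketch.py - illustrative CCU-to-server maths; all numbers are placeholders.
import math

PLAYERS_PER_SERVER = 60  # e.g. one dedicated server instance hosts 60 players

def servers_needed(peak_ccu, buffer):
    # Round up: a partially full server is still a server you have to run.
    return math.ceil(peak_ccu * (1 + buffer) / PLAYERS_PER_SERVER)

estimate = 50_000  # hypothetical launch-day CCU estimate
print("10% buffer:       ", servers_needed(estimate, 0.10), "servers")
print("2x overprediction:", servers_needed(estimate, 1.00), "servers")

# If the real peak comes in at double the estimate, the 10% plan is roughly
# 45% short, while the 2x plan absorbs it - hence the case for rapid scaling.
print("actually needed:  ", servers_needed(100_000, 0.0), "servers")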

But when you’re planning for this, don’t get pushed into signing long contracts. Be mindful of your budget. There are plenty of network partners out there offering pay-as-you-go plans, which are perfect if you find yourself needing to downsize a couple of months after launch (hopefully not!). You don’t want to be wasting money on underused servers.

Overcoming hurdles on launch day

Congratulations, you’ve made it to launch day! Bad news: this is often where the hard work begins, as issues you weren’t expecting start to rear their heads. You might get hit with a barrage of Steam reviews and Twitter messages making you aware of matchmaking issues (don’t ignore these). If you haven’t planned accordingly, this might also be the moment that you realise you don’t have any visibility on the player experience other than the issues they’re directly submitting to you.

Now that you’ve launched your game, this is also the moment when you find out how many servers you actually need to support your players. Again, depending on your planning, you might have reserved 1,000 but only need 100, or vice versa.

Fixing some of these issues can be relatively straightforward. First, you need real-time visibility and logs across 100% of your multiplayer flow, so you can understand what’s working, what isn’t, and why. And never assume your cloud provider has unlimited scalability; most have a limited number of servers, which is why finding a flexible provider matters.
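In practice, visibility across the whole flow means emitting a structured event at every step a player passes through - matchmaking request, match found, server allocation, connection, disconnect - so you can see exactly where sessions fail instead of inferring it from reviews. A minimal, engine-agnostic sketch (step and field names are illustrative):

# flow_events.py - minimal structured logging for the multiplayer flow.
# Step names and fields are illustrative; ship these lines into whatever
# log pipeline or observability stack you already use.
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("multiplayer-flow")

def emit(step, session_id, **fields):
    # One JSON line per step makes it trivial to count drop-off between steps.
    log.info(json.dumps({"ts": time.time(), "session_id": session_id, "step": step, **fields}))

# Example flow for a single player session.
session = str(uuid.uuid4())
emit("matchmaking_requested", session, region="us-east", party_size=2)
emit("match_found", session, wait_seconds=14.2)
emit("server_allocated", session, server_id="gs-4821", region="us-east")
emit("client_connected", session, rtt_ms=31)
emit("session_ended", session, duration_seconds=1460, reason="match_complete")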

You’re also going to want a 24/7 support plan for when your game is live, so you can quickly action fixes and minimize any disruption. You’ll also need to pick a DDoS mitigation solution that’s cost-effective even at low server densities. Finally, make sure you use a server orchestrator so your game can scale quickly when it needs to. An orchestrator will also help you capture vital data that can feed into your observability pipeline.
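The scaling half of that job is essentially a control loop: compare current demand against running capacity, keep some headroom for spikes, and respect the caps your provider actually enforces. A simplified sketch of that decision (all thresholds and limits are made up for illustration):

# scale_decision.py - simplified autoscaling decision an orchestrator might make.
# Constants are illustrative; real orchestrators also account for server warm-up
# time, region placement and draining sessions before shutting anything down.
import math

PLAYERS_PER_SERVER = 60
HEADROOM = 0.25             # keep 25% spare capacity for sudden spikes
PROVIDER_MAX_SERVERS = 500  # providers do have limits - never assume infinite scale
MIN_SERVERS = 5

def desired_servers(current_ccu):
    needed = math.ceil(current_ccu * (1 + HEADROOM) / PLAYERS_PER_SERVER)
    return max(MIN_SERVERS, min(needed, PROVIDER_MAX_SERVERS))

def reconcile(current_ccu, running):
    target = desired_servers(current_ccu)
    if target > running:
        return f"scale up: {running} -> {target} servers"
    if target < running:
        return f"scale down: {running} -> {target} servers"
    return f"hold at {running} servers"

for ccu, running in [(3_000, 40), (8_000, 200), (30_000, 500)]:
    print(f"CCU={ccu:>6}, running={running:>3}: {reconcile(ccu, running)}")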

Post-launch issues

You might be tempted to improve the player experience by introducing new mods and elements into matchmaking modes, but even the most minor update has the potential to introduce major bugs into your game. Fixing these issues can take weeks, sometimes even months.

And for all of the server-related issues we’ve mentioned so far, remember that if you haven’t planned accordingly, any major update to your game servers will mean maintenance, and maintenance means downtime for your players. If they have to put up with that too often, they could abandon your game entirely.

Is there a way around these issues? Yes. Choose a backend that supports multi-versioning, so you can run A/B tests and roll out updates without any outages. Nobody likes it when their favourite game is unplayable because of an update. You can also automate your production pipeline, including the deployment of quick-fix updates and upgrades, to minimize human error.
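In practice, multi-versioning means the backend keeps servers for more than one build alive at the same time: players on the old client keep matching onto old-version servers while the new fleet spins up, and the old fleet only drains once nobody needs it. A simplified sketch of version-aware server selection (builds, IDs and capacities are illustrative):

# versioned_matchmaking.py - illustrative version-aware server selection, so a
# rolling update never forces downtime: each client only joins a server running
# a compatible build, and servers on the old build drain naturally.
from dataclasses import dataclass

@dataclass
class GameServer:
    server_id: str
    build: str
    players: int
    capacity: int = 60

FLEET = [
    GameServer("gs-101", build="1.4.0", players=55),
    GameServer("gs-102", build="1.4.0", players=12),
    GameServer("gs-201", build="1.5.0", players=3),  # new build rolling out
]

def find_server(client_build):
    # Prefer fuller servers of the matching build so near-empty ones can drain.
    candidates = [s for s in FLEET if s.build == client_build and s.players < s.capacity]
    return max(candidates, key=lambda s: s.players, default=None)

print(find_server("1.4.0"))  # players on the old client stay on the 1.4.0 fleet
print(find_server("1.5.0"))  # updated clients land on the new build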

Finally, make sure you’re working with network and platform providers that don’t require a substantial in-house team dedicated to DevOps, LiveOps and engineering. Sure, you may want a staff member or two to oversee these processes and report on them, but a lot of this work can be outsourced to specialists – and that often makes more sense, especially for smaller studios with tighter budgets.

Planning is everything

Whether you’re evaluating your DevOps or roadmapping your next game release, it’s important to have plans in place for every challenge you might encounter along the way. As I’ve hopefully shown, a lot of these can be avoided with the support of trusted external partners – just make sure you do your research and keep scalability at the heart of what you’re doing (though making a great game should obviously come first!).

