Mobile Games are Living Organisms, Too

Modern games are services that run on a plethora of devices, so operating one brings many of the challenges and surprises of distributed systems to new environments. However, unlike in a traditional distributed system where developers control all nodes, they now deal with customer-supplied devices that are almost entirely outside their control.

Armin Ronacher

March 21, 2022

Transcript

  1. Application Performance Monitoring
     We build a service (Sentry) to monitor applications. In short: we tell you when your app runs slow or crashes. In practical terms, we develop an SDK that is embedded into an application, and we build and operate a service that receives the resulting crash and performance reports.
  2. Agenda
     1. What we mean by "mobile game"
     2. What this has to do with organisms
     3. Stories from the real world
  3. A Mobile Game
     In the context of this talk, a mobile game refers to any game with a large installation base of independent devices that can communicate with external services but can also operate independently of them.
  4. In Concrete Terms
     • Large installation base: measured in terms of concurrent devices or users
     • Independent devices: outside of the developer's general control
     • Communicate with external services: devices check in with central services (config updates, metrics, etc.)
     • Operate independently: the game can run offline and does not require the central services
  5. This also applies to
     • Console and PC games with a large installation base
     • A range of in-browser experiences
  6. Emergent Behavior
     When devices talk to central services, they can indirectly influence each other. The emergent behavior can be hard to understand. Behavior is no longer deterministic.
  7. Distributed Killswitch
     • Once emergent behavior is in progress, controlling it can be tricky due to the distributed nature
     • Old affected clients might not have a functioning kill-switch yet and are dependent on the server restoring the service to normal levels
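
A minimal sketch of the client side of such a kill-switch, assuming a hypothetical on-disk flag cache and feature names not taken from the talk. The property the slide implies matters most: a client that never fetched (or can no longer fetch) the flags must fall back to a safe local default instead of depending on the server.

```kotlin
import java.io.File

// Last successfully fetched flag list; empty if the device never got one.
fun loadCachedKillSwitches(cache: File = File("killswitches.txt")): Set<String> =
    if (cache.exists()) cache.readLines().map { it.trim() }.filter { it.isNotEmpty() }.toSet()
    else emptySet()

fun isKilled(feature: String, killed: Set<String> = loadCachedKillSwitches()): Boolean =
    feature in killed  // unknown feature -> not killed, so offline clients keep working

fun main() {
    if (!isKilled("cloud-profile-sync")) println("cloud-profile-sync enabled")
}
```
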
  8. Service -> Device
     1. On start, the application loads a config packet from the central service
     2. Failure upon parsing -> crash on startup
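
A minimal sketch of defensive startup-config parsing for the failure mode above. The "key=value" format, field names, and defaults are illustrative assumptions, not the actual config from the talk; the point is that a parse failure falls back to bundled defaults rather than crashing the fleet on startup.

```kotlin
data class GameConfig(val matchmakingUrl: String, val maxRetries: Int)

// Bundled fallback used whenever the downloaded packet cannot be parsed (assumed values).
val DEFAULT_CONFIG = GameConfig(matchmakingUrl = "https://game.example.com/matchmaking", maxRetries = 3)

fun parseConfig(raw: String): GameConfig {
    return try {
        val fields = raw.lines()
            .filter { it.contains('=') }
            .associate { line ->
                val (key, value) = line.split('=', limit = 2)
                key.trim() to value.trim()
            }
        GameConfig(
            matchmakingUrl = fields.getValue("matchmaking_url"),
            maxRetries = fields.getValue("max_retries").toInt(),
        )
    } catch (e: Exception) {
        // Malformed or truncated payload: log and keep the bundled defaults
        // instead of crashing on startup.
        System.err.println("config parse failed, using defaults: $e")
        DEFAULT_CONFIG
    }
}

fun main() {
    println(parseConfig("garbage from a captive portal"))  // -> defaults, no crash
}
```
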
  9. Service -> Device -> Service
     1. Upon loading the main menu, the application fetches an update file
     2. It encounters a network error and retries
     3. A partial outage causes all active players in the main menu to reload
     4. This increases server load, worsening the outage
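
A minimal sketch of retrying with exponential backoff and jitter, which is one standard way to keep a partial outage from turning every player in the main menu into a synchronized retry storm. The delays and the simulated fetch are illustrative, not values from the talk.

```kotlin
import kotlin.random.Random

fun <T> retryWithBackoff(
    maxAttempts: Int = 5,
    baseDelayMs: Long = 500,
    maxDelayMs: Long = 30_000,
    block: () -> T,
): T {
    var attempt = 0
    while (true) {
        try {
            return block()
        } catch (e: Exception) {
            attempt++
            if (attempt >= maxAttempts) throw e
            // Exponential backoff capped at maxDelayMs, with full jitter so clients
            // that failed at the same moment spread their retries out over time.
            val cap = minOf(maxDelayMs, baseDelayMs * (1L shl attempt))
            Thread.sleep(Random.nextLong(0, cap))
        }
    }
}

fun main() {
    var calls = 0
    val result = retryWithBackoff(baseDelayMs = 10) {
        calls++
        if (calls < 3) error("simulated network error") else "update-file-contents"
    }
    println("succeeded after $calls attempts: $result")
}
```
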
  10. Service -> Device -> Service -> Device
     1. Upon loading the main menu, the application fetches a config file
     2. It encounters a network error and retries
     3. A partial outage causes all active players in the main menu to reload
     4. This increases server load, worsening the outage
     5. The load balancer produces an error response
     6. The game accidentally parses the error response
     7. The game persists the broken config
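
A minimal "validate before persist" sketch for steps 5-7 above: never overwrite the last known-good config with a payload that did not arrive as a successful, parseable response. The status handling, cache path, and validity check are assumptions for illustration.

```kotlin
import java.io.File

val configCache = File("config.cache.json")  // assumed on-device cache location

fun isValidConfig(body: String): Boolean =
    // Stand-in for real schema validation of the config payload.
    body.trimStart().startsWith("{") && body.contains("\"version\"")

fun storeConfigResponse(statusCode: Int, body: String) {
    if (statusCode != 200) return        // load balancer error pages must never reach the parser
    if (!isValidConfig(body)) return     // keep the previous known-good file on disk
    configCache.writeText(body)
}

fun main() {
    storeConfigResponse(502, "<html>Bad Gateway</html>")        // ignored
    storeConfigResponse(200, "{\"version\": 7, \"flags\": {}}") // persisted
    println(configCache.readText())
}
```
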
  11. Device -> Service -> Device
     1. A user inputs a null byte into their profile info
     2. Other players joining that player's game crash
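
A minimal sketch of sanitizing user-supplied profile text before it is stored and synchronized to other clients: strip control characters (including null bytes) and clamp the length. The length limit is an arbitrary illustrative value.

```kotlin
fun sanitizeProfileField(input: String, maxLength: Int = 64): String =
    input
        .filter { it.code >= 0x20 && it != '\u007F' }  // drop NUL and other control characters
        .take(maxLength)
        .trim()

fun main() {
    val hostile = "Player\u0000One\u0007"
    println(sanitizeProfileField(hostile))  // -> "PlayerOne"
}
```
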
  12. A few years ago, in Sentry SDK Land
     1. The Sentry iOS SDK installs a signal handler
     2. Upon crash, it persists a crash file to "disk"
     3. Upon reload, it tries to send the crash: if successful -> delete, if failed -> keep
     What's the problem?
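
A language-agnostic sketch of the "persist, then send on next launch" scheme the slide describes (the real implementation is the Sentry iOS SDK; this is not its code). It deliberately reproduces the behavior described: oldest first, keep everything on failure, which is exactly what the next slide shows going wrong.

```kotlin
import java.io.File

val crashDir = File("crashes")  // assumed on-device cache directory

fun sendPendingCrashReports(upload: (ByteArray) -> Boolean) {
    val pending = crashDir.listFiles()?.sortedBy { it.lastModified() } ?: return
    for (report in pending) {
        val ok = try {
            upload(report.readBytes())
        } catch (e: Exception) {
            false
        }
        if (ok) {
            report.delete()   // sent successfully: remove from disk
        } else {
            break             // keep it (and everything newer) for the next launch
        }
    }
}

fun main() {
    crashDir.mkdirs()
    sendPendingCrashReports { payload ->
        println("uploading ${payload.size} bytes")  // stand-in for the HTTP upload
        true
    }
}
```
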
  13. A slow death
     • Crash reports are sent from oldest to newest
     • Failures are not dropped
     • As time goes on, some customers only get weeks-old crash reports
     • The fleet of devices is caching older and older crashes
     • Exacerbated by running into rate limits upon submission
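
A minimal sketch of what a healthier on-device crash cache could look like given this failure mode: send newest reports first, drop anything past an age cutoff, and cap how many reports are kept at all. The concrete limits are illustrative assumptions, not Sentry's actual values.

```kotlin
import java.io.File
import java.util.concurrent.TimeUnit

const val MAX_REPORTS = 30
val MAX_AGE_MS = TimeUnit.DAYS.toMillis(7)

fun prunedReports(crashDir: File, now: Long = System.currentTimeMillis()): List<File> {
    val all = crashDir.listFiles()?.toList() ?: return emptyList()
    val (fresh, stale) = all.partition { now - it.lastModified() <= MAX_AGE_MS }
    stale.forEach { it.delete() }                            // week-old reports are no longer useful
    val newestFirst = fresh.sortedByDescending { it.lastModified() }
    newestFirst.drop(MAX_REPORTS).forEach { it.delete() }    // cap how many reports we keep at all
    return newestFirst.take(MAX_REPORTS)                     // newest first, bounded
}

fun main() {
    val dir = File("crashes").apply { mkdirs() }
    println("keeping ${prunedReports(dir).size} reports")
}
```
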
  14. Updates take time
     • We patched the client to delete old reports, but rolling out the update takes time
     • We changed the server to lie about accepting reports from these mobile clients
     • Within a few days, the backlog of ancient crash reports cleared
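
A minimal sketch of the server-side workaround described here: tell old, backlog-hoarding clients that their report was accepted (so they delete it) while actually discarding it. The legacy-SDK check and the age threshold are illustrative placeholders, not Sentry's real ingestion logic.

```kotlin
data class IngestDecision(val statusCode: Int, val store: Boolean)

// isLegacyMobileSdk and reportAgeDays stand in for whatever the real
// ingestion pipeline can see about the submitting client.
fun decide(isLegacyMobileSdk: Boolean, reportAgeDays: Long): IngestDecision =
    if (isLegacyMobileSdk && reportAgeDays > 30) {
        IngestDecision(statusCode = 200, store = false)  // "accepted", silently dropped
    } else {
        IngestDecision(statusCode = 200, store = true)
    }

fun main() {
    println(decide(isLegacyMobileSdk = true, reportAgeDays = 400))   // -> accepted, not stored
    println(decide(isLegacyMobileSdk = false, reportAgeDays = 400))  // -> accepted and stored
}
```
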
  15. Abuse Protection vs. Back-Pressure Control
     • When clients send too much data, our ingestion system replies with 429
     • The response includes a Retry-After header that tells SDKs how long they must slow down
     • Our global load balancer has a separate abuse protection
     • A Chrome extension started pushing through the abuse level
  16. When Abuse Protection Worsens
     The abuse-limit protection did not reply with 429. As traffic from an abusive project was creeping up, the moment it crashed through the abuse limit the Retry-After header disappeared, and the traffic went up faster and faster.
     [Chart annotations: "crashed through the global rate limits here", "started emitting retry-after here for abuse tier"]
  17. Perspective
     Our protection pushed one single project into sending an order of magnitude more traffic, past the point of rejection, than all other projects combined. This was exacerbated by this SDK not having a fallback retry-after, unlike other SDKs.
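
A minimal sketch of the client-side rate-limit handling slides 15-17 argue for: honor Retry-After when the server sends it, and fall back to a default backoff when a rejection arrives without the header, instead of retrying immediately. The 60-second default is an illustrative assumption, and only the numeric form of Retry-After is handled (it can also be an HTTP date).

```kotlin
const val DEFAULT_BACKOFF_SECONDS = 60L

/** Returns how long to pause sending, in seconds, or null if the request was accepted. */
fun backoffFor(statusCode: Int, retryAfterHeader: String?): Long? {
    if (statusCode in 200..299) return null              // accepted: no backoff needed
    val parsed = retryAfterHeader?.trim()?.toLongOrNull()
    return parsed ?: DEFAULT_BACKOFF_SECONDS              // missing or garbled header: still back off
}

fun main() {
    println(backoffFor(429, "120"))  // -> 120 (server-directed backoff)
    println(backoffFor(429, null))   // -> 60  (fallback, avoids a retry flood)
    println(backoffFor(200, null))   // -> null
}
```
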
  18. Centralized User Frustration
     1. The Facebook Authentication library fetches a config from its server
     2. The server serves a malformed config
     3. Applications using this library crash globally
  19. Knock-on Effects
     1. Lots of confused users take to the support channels of apps using the library
     2. Lots of crash reports from different apps hit crash reporting services
     3. Many users restarting apps increase traffic to the services queried on startup
  20. User Generated Content Can be Dangerous
     Profile info is frequently denormalized, cached, and synced to other players. Format strings are messy and easy to misuse. A user with manipulated information that is synchronized to other players can cause crashes.
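
A minimal illustration of the format-string pitfall mentioned here: passing user-controlled text as the format string itself. The player name is made up.

```kotlin
fun main() {
    val playerName = "50%% off %s %n"   // hostile or just unlucky user input

    // Unsafe: the user string becomes the format string. With this content,
    // String.format throws MissingFormatArgumentException for the dangling %s.
    // String.format("Welcome back, $playerName!")

    // Safe: user content only ever travels through a %s argument slot.
    println(String.format("Welcome back, %s!", playerName))
}
```
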
  21. Left 4 Dead Corrupted Sprays
     Example: in Left 4 Dead, users can use custom sprays (images) that are placed as textures. When other players encounter a corrupted spray, their game client crashes. Merely sharing a game session with such a griefing player can cause disruption.
  22. Towards Replication
     Replication is worse: it happens when the questionable payload can spread between game sessions. This can even happen through bugs, as in World of Warcraft's Corrupted Blood incident. A status effect was able to spread out of a restricted game session via player pets and then spread like a virus. It required multiple fixes and a month to fully remove.
  23. The Fleet is Bizarre
     The larger the deployment, the more weird devices show up:
     • Completely wrong on-device clocks
     • Bizarre memory corruption
     • Malware interfering
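
A minimal sketch of one common defense against wildly wrong on-device clocks: clamp any client-reported timestamp to a window around the server's receive time. The one-hour window is an illustrative assumption, not a value from the talk.

```kotlin
import java.time.Duration
import java.time.Instant

val MAX_CLOCK_DRIFT: Duration = Duration.ofHours(1)

fun clampEventTime(clientReported: Instant, receivedAt: Instant): Instant {
    val earliest = receivedAt.minus(MAX_CLOCK_DRIFT)
    val latest = receivedAt.plus(MAX_CLOCK_DRIFT)
    return when {
        clientReported.isBefore(earliest) -> earliest
        clientReported.isAfter(latest) -> latest
        else -> clientReported
    }
}

fun main() {
    val now = Instant.now()
    println(clampEventTime(Instant.parse("1970-01-01T00:00:00Z"), now))  // clamped to now - 1h
}
```
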
  24. Gibberish breaks Product Features
     • A string identifying the release was randomly corrupted
     • This created garbage release identifiers on the server
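
A minimal sketch of server-side validation for release identifiers, so that a handful of devices with corrupted memory cannot mint endless garbage releases. The allowed pattern and length limit are illustrative assumptions.

```kotlin
val RELEASE_PATTERN = Regex("""[A-Za-z0-9@._+\-]{1,200}""")

fun normalizeRelease(raw: String?): String? {
    val candidate = raw?.trim() ?: return null
    return if (RELEASE_PATTERN.matches(candidate)) candidate else null  // reject gibberish
}

fun main() {
    println(normalizeRelease("com.example.game@3.2.1+914"))  // accepted
    println(normalizeRelease("com.exa\u0001mple\uFFFD###"))  // -> null
}
```
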
  25. Outline
     • 429 backoff vs. queue (backpressure, load shedding)
       ◦ Offline caching + retry - Sentry 2017 SDK bug?
       ◦ Behavior caused by 429 in general
     • Facebook Sign-In
     • "Infected" but non-replicating player
     • Worms
     • State machine stuck, JSON bad, refresh JSON every frame
     • Android is its own organism, due to fragmentation and custom builds
       ◦ Thinking about the garbage data from China; hacks in games that break the APKs; rooted devices