One of the more difficult challenges faced when moving the Last.fm infrastructure into the cloud was to move our live API traffic, which receives tens of thousands of requests per second, with no disruption to the millions of users who use it on a daily basis. The aim was to make the change in a way that users would not even notice. But, for several reasons, this is tricky!

Structure of our API service

The API speaks to many of the same backends as the web pages on the site, but it has a few aspects which are specific to the API itself:

- a database of API keys and applications
- a cache, accessed thousands of times per second, to support API Sessions (i.e. the data which enables users to be logged in within an app)

The vast majority of this traffic comes through a single URL: ws. Our API Filter application deals with all of the methods on our API page, as well as the legacy Scrobble API, which is still actively used by some third-party apps. It arbitrates between anonymous and logged-in features, and it has to accept users' scrobbles quickly and reliably, whilst rejecting API abuse and ensuring that the service provided to apps is fast and fair.

From the outside, the main visible change is that the DNS entry for ws. has to change to a new IP address. But those familiar with DNS will know that a change like this is not instantaneous. For some period of time - perhaps even days - traffic will still be sent via the old address. So both the old and new deployments have to serve the same data, and a log-in through one entrypoint must create a session that works at either entrypoint. Failure to share the session cache could cause a user to be unexpectedly logged out of the app they're using - perhaps more than once.

One of the key challenges to overcome is that both the caching layer we use (Redis) and the database (Postgres) support replication, to keep two copies in different locations in sync. However, neither of them supports having more than one instance as the Primary - it's only possible to write to one of them - meaning that at first, the new deployment must be configured to access the database and cache of the old deployment.

Fortunately for us, all API keys and application data exist in the Redis cache, and so the consequence of turning Postgres off entirely is limited: for a period of time, you would not be able to register a new API key (as a developer), but other than that there is no user-visible impact. The strategy we chose here was to make the API database read-only, then export the data and import it into the new instance in the cloud, before pointing both new and old API stacks to the new database. The migration time here was under an hour - not so bad.

One point to note about dumping and restoring Postgres from your own deployment into a managed system (such as Google CloudSQL) is that you probably want the database users to be managed by Google Cloud - and that means they will not be part of your database dump, and the login credentials will likely be different. So you have to study the options to pg_dump to make sure you are dumping the structure and data, but nothing extra such as the user accounts. You may need to make some manual additions to the process to ensure the correct permissions on your tables - it's worth examining all the queries your apps make to check you have all the required permissions covered!

It's also extremely valuable to do a dry run, with a detailed set of repeatable instructions, to find out how long this process will take. Do it twice - sometimes you'll need to add extra steps to tear down data you created the first time, to ensure the process is truly repeatable at the time you need it to go smoothly. The end state is to use the cloud database directly, with API app creation re-enabled.

Redis

With tens of thousands of writes per second, it would not be possible to take it offline, and even the time taken to deploy a configuration change (were we to try to make the switch to the new Redis "instantaneously" with a configuration change to the app) could cause some users' scrobbles to be lost, as they're unable to create a session to submit them through. That would not be acceptable!

Tailoring the approach to the types of traffic we serve

At this point it's worth examining the types of API traffic we receive, and adjusting the strategy in a more nuanced way. Our API traffic breaks down broadly into three kinds:

- API requests where a user is logging in, creating a session (perhaps for scrobbling or other logged-in features).

Scrobbles themselves also break down into two kinds:

- Those which come through our recommended scrobbling API (Scrobble 2.0), which are part of a single long-lived session.
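The shared-session requirement described above can be sketched in a few lines. This is a minimal illustration, not Last.fm's actual code: a plain dict stands in for the shared Redis cache, and names like `SessionStore` are invented for the example.

```python
import secrets

class SessionStore:
    """Stand-in for the shared Redis session cache.

    Both the old and the new API deployment must point at the SAME
    store, so a session created via one entrypoint is valid at the
    other while DNS changes propagate.
    """
    def __init__(self):
        self._sessions = {}  # session_key -> username (Redis in production)

    def create(self, username):
        key = secrets.token_hex(16)
        self._sessions[key] = username
        return key

    def lookup(self, key):
        # None means the session is unknown: the app would log the user out.
        return self._sessions.get(key)

# One shared cache, reachable from both API entrypoints.
shared = SessionStore()

# A user logs in through the OLD deployment...
key = shared.create("some_user")

# ...and their next request lands on the NEW deployment: the session
# must still resolve, otherwise they are logged out unexpectedly.
assert shared.lookup(key) == "some_user"
```

If the two deployments each had their own cache instead, the `lookup` on the second entrypoint would return `None`, which is exactly the unexpected-logout failure described above.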
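The read-only export/import procedure for Postgres might look roughly like the following. This is a hedged sketch only: hostnames, database and user names are placeholders, and the exact flags you need depend on your schema, but `--no-owner` and `--no-privileges` are the pg_dump options that keep role ownership and GRANTs out of the dump when the target's users are managed by the cloud provider.

```shell
# Placeholders throughout: hostnames, database and role names are illustrative.

# 1. Make the old API database read-only for new sessions. During this
#    window developers can't register new API keys, but nothing else
#    is user-visible.
psql -h old-db.internal -U admin -d api \
  -c "ALTER DATABASE api SET default_transaction_read_only = on;"

# 2. Dump structure and data only. --no-owner and --no-privileges keep
#    ownership and GRANT statements out of the dump, since the managed
#    instance's users won't match your self-hosted ones.
pg_dump -h old-db.internal -U admin \
  --no-owner --no-privileges --format=custom --file=api.dump api

# 3. Restore into the managed instance, then re-grant the table
#    permissions your apps' queries actually need.
pg_restore -h cloud-sql.internal -U cloud_admin --no-owner \
  --dbname=api api.dump
```

Timing a dry run of steps 2 and 3 against a scratch instance is what tells you whether the read-only window will really be "under an hour".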
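The traffic segmentation above can be made concrete with a small sketch. The method names follow the public Last.fm API, but the buckets and the `classify` function are illustrative assumptions, not the actual API Filter logic.

```python
# Sketch: bucket incoming API calls so each class of traffic can be
# migrated with a different strategy.

LOGIN_METHODS = {"auth.getSession", "auth.getMobileSession"}
SCROBBLE_METHODS = {"track.scrobble", "track.updateNowPlaying"}

def classify(method: str) -> str:
    """Return the migration-strategy bucket for one API method."""
    if method in LOGIN_METHODS:
        return "login"      # creates a session: must write to the shared cache
    if method in SCROBBLE_METHODS:
        return "scrobble"   # must never be dropped during the cutover
    return "other"          # remaining traffic, the easiest to move

assert classify("auth.getSession") == "login"
assert classify("track.scrobble") == "scrobble"
assert classify("artist.getInfo") == "other"
```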