What to Look for at RubyConf 2018: Jeremy Hanna and Molly Struve Previewed Their Talks at 8th Light

Nicole Carpenter

November 13, 2018


RubyConf 2018 kicks off today in downtown Los Angeles. Since 2001, RubyConf has gathered the global Ruby community to discuss emerging trends and share insights. 8th Light recently hosted two of the conference speakers from Kenna Security, Software Engineer Jeremy Hanna and Site Reliability Engineer Molly Struve, before they hit the big stage in LA. They presented their talks to both ChicagoRuby and 8th Light University audiences at 8th Light’s Chicago office.

Read on to learn more about the concepts and highlights from these not-to-miss presentations so you have the inside scoop during networking and Q&A sessions.

Jeremy Hanna on Resilient System Design

Jeremy’s talk, “Uncoupling Systems,” walks us through some of the insights he gained while building out services to scale his company’s vulnerability assessment software.

Jeremy noted how, as developers, we spend a lot of time thinking about our code’s design. We write our code with the goal of everything fitting neatly together in order to create extensible, resilient applications. When designing a system from an architectural perspective, he suggested that we apply those same considerations.

“Treat your services like objects and design for them,” he urged. “Everything has dependencies and trade-offs to account for.”

Here are some of the juiciest takeaways from Jeremy’s talk.

  1. Don’t scale your system until you find that it is actually necessary to do so. Don’t be afraid to hold off on making the system distributed: create the simplest solution that works first, and monitor where the pain points are.

  2. Once you have decided that scaling is necessary, make a plan that accounts for the trade-offs you will encounter. There are a lot of assumptions we have to make when dealing with external services we do not fully control. We assume that the services are available when we need them, that we can rely on the network to be available, and that data will arrive in a timely, uncorrupted manner. We also assume that services are still where we thought they were, that we have enough bandwidth to handle the data, and that the data is in the same format as when we last received it. If one of these assumptions fails, there should be a planned alternative procedure in place.

  3. Think about your caching strategy. Calls over the network are extremely expensive in terms of both latency and potential failure, and using web server memory is often a much quicker and more reliable option. Database queries that are slow or run frequently are good candidates for caching. Invalidate your cache frequently and consistently to reduce the occurrence of stale data. (A minimal caching sketch follows this list.)

  4. Learn to load balance effectively with queues. You are sending messages; monitor how many you are sending so you don’t overload a server.

  5. Encapsulate data changes inside of asynchronous jobs. Data can be stored and updated locally for users, and if the network is unavailable at the time of the change, the stored data can be held until the connection returns and the change can be sent along.
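
To make the caching point concrete, here is a minimal, hypothetical sketch of query caching in a Rails app. It assumes a configured cache store (Redis, Memcached, etc.); the Vulnerability model and cache key are stand-ins, not code from Jeremy’s talk.

```ruby
class VulnerabilityCounts
  CACHE_KEY = "vulnerability_counts/by_severity".freeze

  def self.by_severity
    # Serve from cache when present; otherwise run the slow query and
    # store the result with an expiry so stale data ages out consistently.
    Rails.cache.fetch(CACHE_KEY, expires_in: 15.minutes) do
      Vulnerability.group(:severity).count
    end
  end

  def self.invalidate!
    # Call this whenever vulnerabilities change so readers never see
    # data older than the expiry window.
    Rails.cache.delete(CACHE_KEY)
  end
end
```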

Every application is different, and there is not a one-size-fits-all solution for scaling. Understand your system’s needs and evaluate the costs and benefits of the changes you will be making.

For further reference, check out these resources that Jeremy recommended:

Molly Struve on How Your Fastest Queries May Actually Be Slowing You Down


Molly demonstrated in her talk “Cache is King: Get the Most Bang for Your Buck from Ruby” how multiple fast queries can wreak just as much havoc on an application’s performance as a few big, slow queries. The reason, she explained, is that even if a single query takes only milliseconds, running the same query over and over again still adds up, both in processing time and in round trips to the datastore.
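
To get a feel for why quantity matters, consider a rough comparison you could run in a Rails console. The model name and numbers below are hypothetical; the point is only that thousands of fast round trips cost far more than one bulk query.

```ruby
require "benchmark"

# Hypothetical comparison: many fast queries vs. one bulk query.
# Assumes an ActiveRecord model named User with plenty of rows.
ids = User.limit(10_000).pluck(:id)

Benchmark.bm(12) do |x|
  x.report("one by one") { ids.each { |id| User.find(id) } } # 10,000 round trips
  x.report("in bulk")    { User.where(id: ids).to_a }        # a single round trip
end
```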

Molly walked us through some of the work that she has been performing in her role as Site Reliability Engineer at Kenna and how they sought to improve the performance of their application.

They started with the obvious step of identifying big, cumbersome queries, which they optimized by:

  • adding indexes where they were missing
  • using SELECT statements to avoid N+1 queries
  • processing in small batches to ensure they were not pulling too many records from the database at once (a sketch of these techniques follows the list)
  • rewriting Elasticsearch queries that were timing out to ensure they finished successfully
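
For a sense of what these optimizations can look like in ActiveRecord, here is an illustrative sketch using common techniques (selecting only what you need, eager loading to avoid N+1s, and batching). The Asset and Vulnerability models and the helper method are hypothetical stand-ins, not Kenna’s actual code.

```ruby
# Select only the columns you need rather than instantiating full rows.
risky_ids = Asset.where("risk_score > ?", 500).pluck(:id)

# Avoid N+1s by loading the association up front instead of per record.
Asset.includes(:vulnerabilities).each do |asset|
  puts "#{asset.id}: #{asset.vulnerabilities.size}"
end

# Process in small batches so you never pull too many records at once.
Asset.find_in_batches(batch_size: 1_000) do |batch|
  batch.each { |asset| reindex(asset) } # `reindex` is a placeholder
end
```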

Even with all of those improvements, they were still seeing high loads across all of their datastores and slower-than-expected data processing.

Once they realized that the problem was the quantity of their datastore hits, rather than the quality of their queries, they went to work mitigating the problem. Here are some of the suggestions Molly presented that worked for them:

  1. Instead of making individual calls for each bit of data and overloading the database, use tools like bulk serialization to process the data together and reduce the number of calls made to the datastore.

  2. Eliminate unnecessary requests by using a locally referenced hash cache instead of making repeated calls to an external service. This works well for quick GET requests such as querying for database indexes. (A sketch of this, and of the database guards in item 4, follows the list.)

  3. Have a general understanding of the capabilities your framework provides. In Ruby, for example, some gems provide local caching and helper methods to access the cache. When adding a new gem to an application, consider setting it up by hand in a console to see how it is configured and to gain a deeper understanding of how it works.

  4. Use database guards to prevent unnecessary calls to the database, and never assume that your gem or framework skips the database request when it is asked to process an empty dataset. Utilize logging to understand how your gems, framework, and related services are interacting with your datastores.

  5. Sometimes, it is appropriate to remove datastore hits completely. Particularly when you have a fast-growing and evolving application, it is important to periodically take inventory of all of the tools that you are using to ensure that they are still needed.
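
Here is a rough sketch of two of these ideas, the locally referenced hash cache from item 2 and the database guard from item 4. The class, the external service call, and the model are made-up names for illustration, not Kenna’s implementation.

```ruby
# A locally referenced hash cache: memoize lookups that would otherwise
# hit an external service on every call.
class IndexNameCache
  def initialize
    @cache = {}
  end

  def fetch(client_id)
    # Only go over the network the first time we see this client.
    @cache[client_id] ||= ExternalSearchService.index_name_for(client_id) # hypothetical call
  end
end

# A database guard: skip the datastore entirely when there is nothing to ask for.
def vulnerabilities_for(ids)
  return [] if ids.empty? # no query for an empty dataset
  Vulnerability.where(id: ids)
end
```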

Every datastore hit counts, and these strategies allow us to decrease the load on our datastores and improve performance. It does not matter how fast your queries are: if you are multiplying them by millions, they are going to have an impact on your performance. Molly presented this analogy:

“You wouldn’t throw dollar bills in the air just because a single dollar bill is cheap. Don’t throw datastore hits around no matter how fast they are.”

Pay attention to the datastore hits you are making and ensure that every request your application makes is absolutely necessary.

The challenges addressed in these presentations are all too familiar to Rubyists, and discussing solutions such as these is part of what makes the Ruby community so great. To RubyConf attendees, have fun soaking up all the inspiration! And for those who were unable to attend, or who may be hearing about RubyConf for the first time, I encourage you to check it out next year!