Scott (0s): Datadog is a SaaS cloud monitoring and security platform that enables full-stack observability for modern infrastructure and applications at any scale. It provides teams dashboarding, alerting, application performance monitoring, infrastructure monitoring, front-end monitoring, and security monitoring in one tightly integrated platform, plus 450 out-of-the-box integrations with technologies including cloud providers, databases, and web servers. Aggregate all your data onto one platform for seamless correlation, enabling teams to troubleshoot and collaborate together in one place, preventing downtime and enhancing performance and reliability. Get started with a free 14-day trial by visiting datadoghq.com/hanselminutes.

Scott (46s): That's datadoghq.com/hanselminutes. Hi, I'm Scott Hanselman. This is another episode of Hanselminutes. Today I'm talking with Roberta Arcoverde. She is the director of engineering at Stack Overflow, and she's coming to us from São Paulo. How are you?

Roberta (1m 14s): Hi, I am great. Thank you for having me, Scott.

Scott (1m 16s): So you're the director of engineering at Stack Overflow. What are your responsibilities? What is that? You've been at Stack for eight years now.

Roberta (1m 23s): I have been at Stack for eight years, but as a software engineer. So I joined as a software engineer in 2014, and I was a staff engineer up until January 2022. And that's when I pivoted to management, which can be seen as a promotion or a demotion, depending on how you feel about management. In my case, I was fortunate enough to be managing the same team I was working on as an engineer before, so at least it's a little bit more comfortable.

Scott (1m 51s): So as a staff engineer you were a tech lead, and now the title is director of engineering. I always wonder: tech lead means you're leading the tech, but director of engineering feels more like you're herding the cats. Are you meant to be the most technical, or are you meant to help the most technical work out their problems?

Roberta (2m 8s): I think I'm meant to herd the most technical cats. No, I'm joking. I think that director of engineering is a management position. So now I manage the people, right? I manage their careers, I help them grow. And I also have a strategic vision of where we need to be. So it's a position very similar to what engineering managers at other companies do, but it's a little bit more technical than that too, because I do need to keep my eyes on where the software architecture is moving towards, what our strategy is in terms of scale, and where we want to be, not in the next PR, but in the next three years.

Scott (2m 51s): So much more long-term planning, and making sure you're not surprised by changes in technology.

Roberta (2m 57s): Exactly. And many more one-on-ones.

Scott (3m 1s): Tell me about it. It's review time at Microsoft, and it's just one-on-ones and evaluations for days. Sometimes all I do is meetings, and that's no fun. I assume you look at the code at least to refresh yourself sometimes?

Roberta (3m 15s): I do. I actually strongly believe that first-line managers like myself should be writing code on a daily basis. I think it helps me personally stay technical, it helps me know how to better mentor engineers on my team, junior engineers, and it also helps me assess the impact of changes proposed by senior engineers.
So imagine there's a big redesign upcoming: the fact that I keep writing code, that I keep my eyes on where the software is going, helps me a lot to ask the right questions, I guess, and help them get to the best results.

Scott (3m 54s): I want to get into architecture and opinionated architectures, but why don't we take a step back and think about what runs Stack Overflow right now? Because Stack Overflow has been around forever. You know, when you think about a Twitter or Facebook or Stack Overflow, these are big sites that serve tens or hundreds of millions of people. What is the architecture behind Stack Overflow?

Roberta (4m 14s): Well, right, Scott, it's a little bit peculiar, I would say, especially compared to other big sites like the ones you mentioned. Stack Overflow launched in 2008, so we now have a code base that is 14 years old. We run on prem, in our own data center; we haven't gone to the cloud. We also have a monolithic application, so we have not broken it down into services or microservices. And we have a multi-tenant, .NET-based web app running on a single app pool across only nine web servers. And that same application, that single app pool on IIS, is handling 6,000 requests per second at the moment.

Scott (5m 1s): 6,000 requests per second.

Roberta (5m 4s): That's correct.

Scott (5m 5s): Okay. So let's try to unpack this a little bit. Well, you've been there while Stack Overflow has been around. IIS has come and gone for some, Apache has come and gone for some, the rise of NGINX, the rise of Kubernetes. These are all bandwagons that you could have technically jumped on and run with. Which ones did you choose to ignore? And which ones did you choose to think, "maybe this is something that Stack Overflow needs"?

Roberta (5m 34s): We managed to ignore a lot of them, I guess the most recent ones being microservices and Kubernetes and all that jazz. And the reason has always been that one of our strongest philosophies in engineering at Stack Overflow, ever since I joined, and it's still a little bit like that today, is that we always start by asking the question: what problem are you trying to solve? And the problems that these tools and bandwagons try to solve are not problems that we were facing ourselves. So when you think about things like a monolith, for example, right? Why do you break down a monolith into microservices or services? Typically because you want to scale out to separate teams.

Roberta (6m 18s): You want to have multiple teams working on the same project without stepping on each other's toes. You want to have fast deploys, for example. Fast deploys have never been a problem for us. We put Stack Overflow in production multiple times a day in four minutes. That's the time it takes to build and deploy Stack Overflow to prod. If we had to revert a change, for example, it was always super fast, right? In a matter of minutes we could revert our deployments, even though it is still a monolith, not a bunch of really small applications but a single big application. And we invest in efficiency on those things. You know, all the Accelerate metrics, all the things in that great book, by the way, one of my favorite books on software engineering practices lately. When we read that book, that's when we figured out: oh, okay.

Roberta (7m 5s): So that's why we don't actually need to change that much, right? We're not too shabby; we're actually doing okay on all these metrics. Our lead time is great. Our time to merge is good.
These are not things that are painful to us. So what problem are we trying to solve by moving to microservices? And that question hasn't had a good answer until now, and I'm not promising anything. Maybe in a couple of years we find out that the team has grown so much that it's becoming really hard for engineers to be working on the same code base together. But so far, for our engineering teams working on the Q&A platform, I think we are at around 50 engineers at this point.

Scott (7m 48s): Five zero?

Roberta (7m 49s): Yes.

Scott (7m 49s): Wow.

Roberta (7m 50s): So, so far it hasn't been that big of a problem. It's starting to become one, though, I'm not gonna lie, right? Because back when I joined the company in 2014, I think there were 10 engineers working on the Q&A platform. So it was much easier for those 10 engineers to have the entire code base in their minds. They built it from scratch and they were evolving it by themselves. But now that we are hiring more, that our teams are growing, it's becoming trickier to onboard new engineers into this 14-year-old code base. Perhaps we will find ourselves in a situation where, hmm, actually, shouldn't we break this specific module down into a service, perhaps, and give it to a specific team and have them own it, so that they don't need to understand the entire code base anymore?

Roberta (8m 38s): We are not there yet. I don't think this is a problem that we are facing right now, but, you know, we are pragmatists. That's at least how I see our culture. Like, we go for the most pragmatic choice. So if it comes to a time when we need to do that, that's certainly something that we would consider.

Scott (8m 56s): I want to make sure that folks heard you mention the book Accelerate by Dr. Nicole Forsgren, Jez Humble, and Gene Kim. Nicole was actually on episode 648, so if you go to hanselminutes.com/648, you can check out that episode as well. So I'm hearing you say that you didn't have a lot of the pain that some of these new technologies, and this new thinking around technology, are there to solve. These technology solutions exist because they solve some painful problem, but you had a mature DevOps model perhaps earlier than most people who were, you know, incorporating DevOps into their companies?

Roberta (9m 37s): Perhaps saying that we had a mature DevOps model is a little bit too optimistic, but we just didn't have... let's think about it from the perspective of why we are not in the cloud, right? Why be on prem? We had engineers that were very experienced dealing with infrastructure and DevOps, and they built a solid infrastructure over which we could deploy our applications. We also didn't need to be building and shipping and releasing new apps all the time. We had a single monolithic application that we just needed to deploy. So there was never a need for an infrastructure on top of which we could deploy new services, you know, like creating new infrastructure for a new application.

Roberta (10m 20s): That wasn't something that happened too often, and because it wasn't something that happened too often, it wasn't something that we had to solve for. And a lot of new things like Kubernetes and the cloud are there to facilitate deploying new applications, right? They have a lot of tooling around ensuring that that process is a little bit less painful, but it was just not a problem that we had, because we were not doing that all that often.
Scott (10m 47s): It seems like it would take a certain amount of organizational willpower or maturity to say no. I see so many projects say, "all right, we're going into our microservices stanza, we're moving everything." And then they work for a year or two years and they have the same app, except now they've changed what's underneath it. Who was it who was saying no? Or was it you as a team collectively saying no to these new things, these new-fangled things?

Roberta (11m 16s): To be honest, Scott, I don't think that we had to say no, because it was never brought up as something that we had to do, right? It was part of our philosophy to always ask, like I mentioned before, what is the problem that we are trying to solve? So if someone had come to us back then and said we should go to the cloud, they would hear the question "why?", they wouldn't hear "no", right? And perhaps there is a very good reason why we should go to the cloud. In fact, I won't tell you that this isn't something that we consider and reconsider every now and then. And there are definitely many advantages to being in the cloud, especially from an infrastructure perspective.

Roberta (12m 1s): But as long as the answer to "why" was not satisfactory, or we had different priorities at the moment, then there wasn't really a reason to make such a dramatic change in the way we did things. Not to say that we should not consider it again, right? But at the time, I don't think anyone was saying no; I think they would just ask why.

Scott (12m 25s): That's really cool. Because today I'm wearing my Ted Lasso t-shirt and it says: "Be curious, not judgmental". And that's kind of what you're describing: like, well, okay, I can see why, and we ask questions, as opposed to "no, that's a bad idea" and being immediately judgmental. That's a sign of a healthy organization.

Roberta (12m 45s): I think so too. And I think that's part of what our culture was founded on, right? Being curious, asking the right questions, and trying to solve problems for everybody. That culture is still like that even 14 years later; we're not married to any decisions that we made in the past, so we are constantly re-evaluating and changing. Like, for example, there are a lot of conversations going on right now about re-evaluating what parts of the monolith we should be breaking down now that we are growing, and especially preparing for the next stages of growth, right? And the discussion in itself is super valuable.

Roberta (13m 27s): And the fact that we can have it in a nonjudgmental environment, and be willing to change for the right reasons, and know how to measure whether or not it's the right reason, to me, that's also a very good sign of a healthy organization.

Scott (13m 44s): That's cool. At the end of the day, your customer has to be at the center of everything you do. That starts with the right customer data strategy, as well as the right foundation to solve the challenges that typically inhibit the success of your company, such as data quality, data governance, and connectivity. mParticle is your real-time customer data infrastructure that helps you accelerate your data strategy by cleansing, visualizing, and integrating your customer data from anywhere to anywhere. Ultimately, better data leads to better decisions, better customer experiences, and better outcomes.
See why the best brands choose mParticle. Go to mparticle.com. That's m-p-a-r-t-i-c-l-e.com.

Scott (14m 25s): So, um, two more technical questions. You are a fan of vertical scaling. You said that you're doing 6,000 requests a second on a single web server. You said you had nine total. And how many page views are you doing a month?

Roberta (14m 40s): Yeah, we have nine total. It's 6,000 requests per second, out of which I would say 80% are page views. So I won't be able to do that math for you.

Scott (14m 49s): Okay. But it's tens of millions of page views a month. It's a lot.

Roberta (14m 52s): Okay. Yeah. So we do roughly 2 billion page views per month.

Scott (14m 58s): Okay.

Roberta (14m 59s): That's quite a lot, but not on a single web server. We do it across nine web servers. It's still single digits, but,

Scott (15m 5s): But nine. So that's so not the cloud, right? I thought you'd have hundreds and hundreds of Kubernetes nodes across multiple hybrid clouds, you know, and Azure, dah, dah, dah. You know, you have nine metal boxes and you can go and touch them, right? You could go down there and say, that's that one, that's web three right there.

Roberta (15m 24s): Yeah. And that's exactly how we named them, right? We have NY-WEB01 through NY-WEB09 in New York. We have a data center in New York, well, technically in New Jersey, but let's not go there. But yeah, 2 billion page views a month, roughly. And those servers, it's also important to note that they run at 5 to 10% capacity. So we could, in theory, be running on a single web server. We wouldn't want to do that, but theoretically it would be possible.

Scott (15m 53s): Okay. So that's really interesting. So I was recently on a physical server: Hanselman International, which is me, the hanselman.com stuff, was on a single web server under someone's desk in Canada. And then eventually that machine died and I moved it up into the cloud. And when I teach people how to move things into the cloud, I let them know that in the cloud you can run things a little hot. So I'll run it at 60%, 80%, and I don't feel bad. I would never do that. It's the example I always give: driving a rental car versus driving your own car. Like, with a rental car, I'm going to push really hard on the gas and I'm going to rev the engine, and you're going to be like, why are you treating the car with such disrespect? Well, it's not mine.

Scott (16m 33s): So the cloud runs at 70, 80%, but you're telling me that across all nine of your servers, right now, you're at 5 or 10%, kind of idling almost.

Roberta (16m 43s): And it's important to keep in mind that Stack Overflow was built to scale that way, right? We are designed for low latency. We are designed to grab requests, execute a few queries, and return as soon as possible so that we can pick up the next one. We cannot handle a lot of memory pressure, so we also design for low allocations everywhere. We try to avoid creating objects that will have to be collected very often. We try to avoid memory pressure on those nine web servers so that we don't have to stall on garbage collections, because a stall on a garbage collection is terrifying for these web servers. So we try to run smoothly and with a very low memory footprint, because that's how that scalability model was designed to work on the infrastructure that we have.
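To make the low-allocation philosophy concrete, here is a minimal, hypothetical C# sketch, not Stack Overflow's actual code, of the kind of hot-path pattern being described: scratch work happens in a stack-allocated buffer, so the only heap allocation is the final result and the garbage collector has almost nothing to do per call.

```csharp
using System;

public static class LowAllocationExample
{
    // Hypothetical hot-path helper: format a vote count as "1.2k"-style text.
    // The scratch buffer lives on the stack (stackalloc) and TryFormat writes
    // into it directly, so the only heap allocation is the returned string.
    // That keeps GC pressure low on code that runs thousands of times per second.
    public static string FormatScore(int score)
    {
        Span<char> scratch = stackalloc char[16];

        if (score < 1000)
        {
            score.TryFormat(scratch, out int written);
            return new string(scratch[..written]);
        }

        double thousands = score / 1000.0;
        thousands.TryFormat(scratch, out int length, "0.#");
        scratch[length++] = 'k';
        return new string(scratch[..length]);
    }
}
```

For larger temporary buffers, renting from System.Buffers.ArrayPool&lt;T&gt;.Shared and returning the buffer in a finally block serves the same goal: keeping allocations, and therefore garbage collection pauses, off the request path.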
Roberta (17m 34s): And that's exactly why running at 5 to 10% is not just because we can, but because we actually need to. It also gives us room to grow and to make mistakes, right? And then we can catch: huh, there's something weird going on, because memory today is around 50% and it shouldn't be. So we take our time to analyze memory dumps to see where it is that we are allocating, perhaps more than we should, or more than we used to. And that's when we keep optimizing. This has been our development process since the beginning, right? We measure, we find bottlenecks, we solve them, rinse and repeat. That's how we ended up with things like Dapper, for example, our micro ORM that Marc Gravell built during his tenure here.

Roberta (18m 19s): It is literally an ORM that allows you to run queries against a database with a very, very small memory footprint. So it's highly optimized, and because of that, it has trade-offs: you have to write your own SQL, which is not something that bothers us, and it has a very interesting and smart mechanism for caching how objects need to be hydrated. And we built it because we had to, right? We built it because we had to have this low-memory, low-allocation kind of design model. And that's what allowed us to grow and to stay where we are right now, running on nine web servers for years. We haven't bought a new machine in years at this point.

Scott (19m 2s): For folks who are listening carefully, Roberta was referring to Dapper, D-A-P-P-E-R, the object mapper, not Dapr, D-A-P-R, the sidecar container system for Kubernetes. So Dapper, you can find it at DapperLib.github.io. So, memory allocation. Memory allocation is a problem in .NET because, you know, garbage collection is at its core, but a lot of improvements have been made in garbage collection over the last 14 years. It sounds like you're still super careful about any kind of unnecessary allocation. Is that something that you should be thinking about as a .NET developer?

Roberta (19m 39s): We are, but we have benefited immensely from the improvements in .NET as well, right? I remember that when we upgraded to .NET Core, I think it was 3.1, a couple of years ago, if I'm not mistaken, that measurably improved our memory numbers. Like, I think it decreased from, I'm saying, 5 to 10%; it probably went down to 3 to 6% or something. And that's also something that we observed a couple of months ago when we moved to .NET 6. However, because we are already so optimized, right? We already take care of not creating more instances of objects than we have to.

Roberta (20m 21s): The code design is very much based on static service locators rather than dependencies being injected into object instances. And again, trade-offs. The new versions of .NET have definitely made our lives easier, but to be quite frank, it didn't make that much of an impact given how the software was already built. We already had such an optimized code base that those improvements are free wins for us, which is great, but we were already running very lean.
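For listeners who haven't seen Dapper, here is a minimal, hypothetical sketch of the pattern it enables; the Post class, table, and connection string are invented for illustration and are not Stack Overflow's actual code. You write the SQL by hand, and Dapper materializes the rows into objects, caching the mapping code it generates so repeated queries stay cheap.

```csharp
using System.Collections.Generic;
using Dapper;                      // the micro ORM discussed above
using Microsoft.Data.SqlClient;    // SQL Server ADO.NET provider

public class Post
{
    public int Id { get; set; }
    public string Title { get; set; }
    public int Score { get; set; }
}

public static class PostQueries
{
    // Hypothetical example: fetch a user's ten highest-scored posts.
    // The SQL is hand-written; Dapper maps each row onto a Post instance
    // and reuses the generated materializer on subsequent calls.
    public static IEnumerable<Post> TopPostsFor(string connectionString, int userId)
    {
        using var connection = new SqlConnection(connectionString);
        return connection.Query<Post>(
            "SELECT TOP 10 Id, Title, Score FROM Posts " +
            "WHERE OwnerUserId = @UserId ORDER BY Score DESC",
            new { UserId = userId });
    }
}
```

Compared to a full object-relational mapper, there is no change tracking and no generated SQL; that is the trade-off Roberta mentions, and it is a large part of what keeps the allocation profile so small.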
Scott (21m 2s): Yeah. I'm on hanselman.com. I keep everything in memory because it's a read-mostly environment, right? Comments come in, but for the most part it's 95% reads. When I worked on websites like 800.com, which had product catalogs and shopping carts, we would imagine a slider bar where like 90% of people are in the product catalog looking at stuff and 5% are checking out. What is the relationship between reads and writes on Stack Overflow?

Roberta (21m 31s): Oh, that's an interesting question. So first of all, it's important to note that 80% of our traffic is anonymous. People go to the question show page, which is the page that we show when you are Googling something and you get a result that takes you directly to Stack Overflow, to the solution to your problem, right? And then you leave. Usually people do that. They see the response, they figure out whether or not it helps them, they copy-paste something perhaps, and they close the tab, and that's it. And that's 80% of the interactions on our site. But we have 20% that engage with the site somehow, right, that are effectively writing something. Which doesn't mean that we can cache our question show page that heavily. You mentioned that you keep everything in memory for your blog.

Roberta (22m 18s): We actually do not.

Scott (22m 20s): Really?

Roberta (22m 20s): Yeah.

Scott (22m 20s): There's gotta be some cache in there. Tell me your caching strategy. I'd love to understand.

Roberta (22m 25s): We do have, of course, two different levels of cache on the front, right, in memory on the web servers. And we also have our SQL Servers; they have 1.5 terabytes of RAM, so a third of the entire database can be very quickly accessed in RAM, which is something that I don't think you can get in the cloud yet, or perhaps it's very expensive.

Scott (22m 49s): Okay. So let me just see if I can get an understanding here. I apologize, but I want to just level set. You have, how much, 1.5 terabytes, that's I think 1,500 gigabytes of RAM, which is only a third of the database, but at the same time that's amazing because it's a third of the database. Because my next question was going to be: can you not hold all of Stack Overflow in memory at once? It sounds like that would be a challenge.

Roberta (23m 16s): Yeah, that would be a challenge. I imagine we cannot. But I will say, we don't actually cache the question show page, which is our hottest path, right? 80% of the traffic goes to that page. And the reason why we don't cache it is because we actually did cache it before, using Redis at the time and also the output cache of ASP.NET Core. And then we found out, through measuring, that the cache hit/miss rate was actually not that great. We had a lot of items being cached that were just never hit and then expired, because the distribution of access on that page is so broad. There are just so many questions.

Roberta (23m 57s): There's so much content that different people rarely hit the same page within a window of time that made sense for content to be in the cache. We were just over-complicating our code worrying about cache invalidation and not really benefiting from it. So we removed all that caching, and that was like three or four years ago. We stopped caching that page, we stopped caching the content, and, little did we know, it didn't really have any measurable effect on performance. We were still able to handle requests and send responses super fast because of how the architecture was built at the time. So currently, for example, the average time to render that page is around 20 milliseconds.

Roberta (24m 41s): So, even without cache.
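The decision Roberta describes, measuring the cache hit/miss rate before deciding whether a cache is worth its complexity, can be sketched in a few lines. This is a hypothetical illustration rather than Stack Overflow's code: it wraps .NET's IMemoryCache and counts hits and misses so you can see whether entries are ever read again before they expire.

```csharp
using System;
using System.Threading;
using Microsoft.Extensions.Caching.Memory;

// Hypothetical wrapper that tracks how often cached entries are actually
// reused. A persistently low hit rate suggests the cache is adding
// complexity (and invalidation work) without paying for itself.
public sealed class MeasuredCache
{
    private readonly IMemoryCache _cache = new MemoryCache(new MemoryCacheOptions());
    private long _hits;
    private long _misses;

    public T GetOrCreate<T>(string key, TimeSpan timeToLive, Func<T> factory)
    {
        if (_cache.TryGetValue(key, out T value))
        {
            Interlocked.Increment(ref _hits);
            return value;
        }

        Interlocked.Increment(ref _misses);
        value = factory();
        _cache.Set(key, value, timeToLive);
        return value;
    }

    // Fraction of lookups served from the cache; near zero means entries
    // are mostly expiring unused.
    public double HitRate
    {
        get
        {
            long hits = Interlocked.Read(ref _hits);
            long total = hits + Interlocked.Read(ref _misses);
            return total == 0 ? 0.0 : (double)hits / total;
        }
    }
}
```

If the hit rate for a hot page stays near zero, dropping the cache entirely, as Stack Overflow did with the question page, can be both simpler and just as fast.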
Scott (24m 41s): Wow. Have you ever, as a research project I guess, thought about whether it could go to the cloud and what it would look like? Because naively it would be: buy nine Godzilla-sized computers in the cloud, and then you don't have to worry about power and network, and you just run the same architecture in the cloud. Like, where would you even start?

Roberta (25m 4s): Yeah, Scott, actually we have, as an exercise, thought about it many, many times over the years, right? I remember when I joined, we did this regular exercise where we would try to understand how much it would cost to run Stack Overflow in the cloud, and it was just never worth it, which is why we never did it. These days, when we think about the cloud, we are thinking less about the power that it would take and more about latency. The other thing that we optimize for a lot with the current design is low latency. So we have an infrastructure that has single hops between nodes, and those hops are connected via 10-gigabit network links, and Nick Craver will tell you all about it.

Roberta (25m 44s): Because that's his jam. That's all he likes to do. Even though he's not working here anymore, network stuff is all that he loves and talks about all the time. But that's an infrastructure that's very hard to mimic in the cloud. So we would need to understand what are the things that would break first if we tried to lift and shift, right? If we tried to make Stack Overflow run in the cloud, what are the strategies that we would need to prepare, or to be ready for, if this is something that we indeed want to do in the future?

Scott (26m 12s): Yeah.

Roberta (26m 13s): We still do those exercises regularly. We have a lot of people, brilliant developers and SREs in the company who are Azure experts, who run these exercises themselves. And we are constantly re-evaluating: okay, maybe we cannot put the Stack Overflow database in the cloud just yet, but what are the services that we could be moving to the cloud? We have some stateless services that could be running as Azure Functions, for example, right? What are the things that we can move piece by piece, and what does that give us? Again, what are the problems that we could potentially be solving if we did that?

Scott (26m 48s): Right. And that's the thing: if it's not a problem, then we just moved it, and now we're at the same place. Was it worth it? You know? Exactly. Yeah, your face looks like mine. You're like, yeah, we could do that, but I don't know. Interesting. And what about pull requests and such? One of the things that I love about having a nice DevOps system is that every pull request can spin up a little container, and I can go and say, it's pull request 42, and I can go to pullrequest42.whatever.hanselman.com and see a mini version of my site. I assume you... can you just fire up Stack Overflow on your laptop, and can you fire it up in a container, and can you see it per pull request?

Roberta (27m 27s): We can, yeah. That's actually one of the biggest wins in dev experience, right, which is a topic that's also on fire these days. When I joined Stack Overflow, we didn't have a very sophisticated way to run it locally. So we would install a bunch of tools and run Redis and Elasticsearch on a Vagrant VM somewhere, and that's how you would connect to all the services and dependencies that Stack Overflow uses, but it was all local.
And the good thing about it is that I could run and work on Stack Overflow on a plane without internet, right? It was all running on my local machine. These days, like I mentioned, one of the biggest wins that we got is that, of course, working locally is great.

Roberta (28m 10s): Having a local environment is great, but when you need to share what you're working on with other people, then you need to start with: oh, can you please check out my branch and run it locally yourself? And, you know, maybe see what are the changes that I'm making, take a look at them, test them. And especially for non-technical people, for non-developers: if I want to show something to my PM, can I tell them, hey, can you please check out my branch on your machine, which is not a dev machine, and try to run this software right there? So our DevX team, or developer experience team, which is the team in charge of building things that make our lives as engineers easier, recently built this capability for us to also have runnable, executable PR environments, right?

Roberta (28m 57s): And it's also using containers. We use .NET Core, so everything is easy to port. So the way it happens is you add a tag to your PR, we have a specific tag that you can mark a PR with, and that will immediately spin up a URL where your application will be deployed, and also other URLs for dependencies and services that we use on the side, for things like sending emails, right? Let's say that I'm testing a specific flow that ends up sending an email. We also have a URL connected to that PR environment that will act as the output for emails, so we can actually troubleshoot: see the emails, see what they look like, click links if needed.

Scott (29m 39s): So what spins those up? Is that a cloud service, or is that just one of the Stack machines? You have nine servers; are you using one of those?

Roberta (29m 47s): That's a cloud service, and we're also using Kubernetes behind the scenes, if I'm not mistaken, but at the end of the day it's all containers. Like I said, we will use the right tool for the job if we need to. We just haven't had to yet for the main application.

Scott (30m 4s): Well, pragmatism is what I'm hearing. It just comes down to productivity and pragmatism. Those are the words of the day. My last question: what about Stack Exchange? You mentioned multi-tenant. If I hit politics.stackexchange.com or whatever, is that hitting one of those nine servers, or is that another?

Roberta (30m 19s): It is, yeah. It's a single multi-tenant monolithic application, and that's perhaps why the deployment has always been so fast and worked so well for us, right? Because we only need to deploy one application that powers all 200 sites that we have currently.

Scott (30m 33s): That's amazing.

Roberta (30m 34s): Around 200 sites, if I'm not mistaken. So all the Q&A sites that are powered by the Stack Exchange network: Arqade, Server Fault, Super User, Stack Overflow, Cooking, one of my personal favorites. When you hit that URL, you land on the same .NET web application, on the same app pool, and based on host headers, that's how we know which database to hit, which CSS and static files to load. Everything is based on what URL you're actually hitting.
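To make the host-header idea concrete, here is a minimal, hypothetical ASP.NET Core middleware sketch. The tenant table, property names, and connection strings are invented for illustration; this is not Stack Overflow's actual implementation. The point is only that the incoming Host header selects the tenant, and with it the database and static assets.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Hypothetical tenant descriptor: which database and CSS bundle to use.
public record Tenant(string Name, string ConnectionString, string CssBundle);

public class TenantResolutionMiddleware
{
    private readonly RequestDelegate _next;

    // Illustrative mapping from host header to tenant configuration.
    private static readonly Dictionary<string, Tenant> Tenants = new()
    {
        ["stackoverflow.com"] =
            new Tenant("Stack Overflow", "Server=...;Database=StackOverflow", "stackoverflow.css"),
        ["cooking.stackexchange.com"] =
            new Tenant("Cooking", "Server=...;Database=Cooking", "cooking.css"),
    };

    public TenantResolutionMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        // The Host header decides which site (and therefore which database
        // and static files) this request belongs to.
        if (Tenants.TryGetValue(context.Request.Host.Host, out var tenant))
        {
            context.Items["Tenant"] = tenant;   // downstream code reads this
            await _next(context);
        }
        else
        {
            context.Response.StatusCode = StatusCodes.Status404NotFound;
        }
    }
}
```

Registered once (for example with app.UseMiddleware&lt;TenantResolutionMiddleware&gt;()), every request downstream can pull the resolved tenant out of HttpContext.Items, which is roughly the effect Roberta describes: one application, one app pool, 200 sites.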
Scott (31m 4s): What is sitting in front? Do you have any kind of a front door, or an F5, or a Cisco LocalDirector, or some kind of... what is your reverse proxy that's sitting in front of that? Because one of the things that I have noticed, you know, when you're rolling out a new update across nine servers, is that you could potentially hit one that's not ready yet and then get that "I'm restarting" kind of moment. How do you prevent those start-up issues?

Roberta (31m 28s): We have rolling builds. So we have those nine web servers; they are behind an HAProxy front end, and every time that we need to deploy a new version of the site, we take a server out of rotation, update it there, and put it back in rotation.

Scott (31m 43s): Okay. And do you have health checks? Like, I have a secret health check URL at hanselman.com/healthcheck.

Roberta (31m 50s): We do have a lot of metrics and health checks and observability, because you need to, right? Regardless of being in the cloud or being on prem, good observability is very important. But yes, we do have health checks, and alerts connected to those health checks, so that if one server goes down after an upgrade, we stop the process and we troubleshoot, rinse and repeat.

Scott (32m 12s): Very cool. Well, this has been such a fascinating conversation. I really appreciate you hanging out with me today.

Roberta (32m 18s): Thank you, Scott. It's an honor for me. I'm a longtime fan of the show, so thank you for having me again.

Scott (32m 23s): Well, thank you for your work. We've been chatting with Roberta Arcoverde. She is the director of engineering at Stack Overflow. You can check her out on Twitter at twitter.com/rla4, that's R-L-A and the number four. This has been another episode of Hanselminutes, and we'll see you again next week.