
Scaling For Complexity With Container Adoption

Episode 5

With

Matt Quill

Strategic Business Development Manager, F5


About the episode

When it comes time to move to the cloud, the concerns can be many. Companies are increasingly security conscious, and success depends on applications being reliable. There’s also the need for agility, to adjust to changes in the market. F5’s Matt Quill tells Burr how planning carefully and collaboratively can address challenges while building pivotal internal relationships.

Matt Quill

Strategic Business Development Manager

Transcript

00:02 — Burr Sutter
Hi. I'm Burr Sutter, and this is Code Comments: an original podcast from Red Hat. Now, I've been talking to a lot of customers of late, really talking about their transformational journey. What does it mean to have that holistic experience from the world of virtual machines to the world of containers and the world of Kubernetes? Well, you have probably been on that same journey yourself, and I think it is super interesting to drill down on that topic. Because if you think about some of our larger customers here at Red Hat, some of them are in highly regulated and security conscious organizations. I mean, things like big government, big insurance, big finance, big banking. All of those organizations have very special needs: they want to increase their security posture and increase the availability and reliability of their applications, but at the same time achieve greater agility. And that is a tough thing to pull off, to have all those requirements working hand in hand and still deliver a better solution. Well, the good news is we're not alone in helping organizations navigate these challenges. On today's episode of Code Comments, I talk to Matt Quill. He's the Strategic Business Development Manager at our partner F5. And Matt has a ton of experience in this exact area: helping customers transform and move along their journey from virtual machines to containers. Matt, welcome to the podcast. I look forward to our conversation today.

01:21 — Matt Quill
Great to be here. Thank you very much.

01:23 — Burr Sutter
Well, it is absolutely fantastic to have you, because I know F5, of course, is integral to and implemented throughout many of our mutual customers' environments, both at Red Hat and F5. The thing I want to dive into is what we hear from all our customers right now, and that is the modernization journey. Everyone wants to talk about that keyword, modernization. They all think, "We've got to leave these VMs behind, we've got to move to the cloud, we've got to move to containers." Can you tell me a little bit about how you think about that space, that move from VMs to containers?

01:52 — Matt Quill
One of the things that I've been privileged to do in my job is have a lot of customer contact, where we can understand needs and drivers and what really matters. So we see a couple of different trends going on in the IT space. One is that the transition to the hyperscalers (AWS, Azure, GCP) is happening quite often, but there's also the challenge of things like shadow IT. So at the departmental level, people say, "Hey, I don't want to go through the traditional IT processes, write tickets, wait for the infrastructure to get stood up, and take hours to days, weeks, months to get everything up and ready and going. I want quick time to value," because obviously time is money. So they move to the cloud, which becomes difficult to manage and unwieldy in many respects, because your central IT has its security policies and its processes in place, and it is trying to apply those processes, especially at the security level, within the public cloud as well while setting up these hybrid cloud architectures.

(02:51): So the driver is really for internal IT organizations to have the agility of one of the hyperscalers. So for example, bang, I can stand up the infrastructure, I can test it, I can test my code more rapidly, and then roll my applications or my changes to production in a rapid manner without being disruptive, but do so while maintaining a very hardened security posture. And one of the things that we find in talking to customers is this aspiration to be more agile. They have to be more agile because of the emergence of public cloud, because of the real demands on their business to increase time to value. But the challenge is that if they're in highly regulated industries - the federal government, financial services, telco - they also have to do it really smartly.

(03:39): They can't compromise the availability of their applications, and especially not the security of their applications. And so that's really what drives the conversation, and that's really one of the key reasons why we've seen a lot of adoption of containers. The overall goal is to be more agile, to bring new revenue generating applications into production as quickly as possible, to make changes and updates, and to adopt agile methodologies (DevOps, Agile, what have you), but to do so in a way that is very responsible, because obviously there's risk involved in moving fast. So I find OpenShift especially is that platform that merges those two desires at a platform level, and F5 can help. The goal is to get stuff into production and make changes, updates, patches, what have you to that code, but do so in a responsible manner. And I find with Red Hat, both the automation play with Ansible and OpenShift are key enabling tools that enterprises, especially those in verticals that need to maintain their security posture, have started to adopt.

04:54 — Burr Sutter
Well, I've got to tell you, Matt, this has started off very well. We have a lot to unpack here. I think I could spend the next four hours talking to you about these kinds of things. And one of the things I often do when I'm talking to audiences about that cloud native journey is cover microservices and the evolution of their organization, including the cultural change, the agile process adoption, things like CI/CD, deployment patterns, blue-green, canary. You mentioned availability, you mentioned security. Wow, there's a ton of things to dive into. So let's try to tease some of these things apart. What would you say is one of the most critical missing items, meaning something the user, the customer, just forgot about because they had to move to the public cloud, they had to move to containers, they were moving so fast, and now it's a major problem? Do you have any examples of that?

05:39 — Matt Quill
Yeah. So I think that, for example, the first thing would be scale and performance. So item one is, I've adopted containers, I need to make them scale and perform. These applications have to perform as well as or better than their traditional three-tier VMware-type applications, so that's part one. The second one is applications that are mission critical, where the aspiration is to refactor and migrate these applications into a containerized environment. The refactoring is also fairly complex, but then you have to think about, "Okay, I want these applications to fail over. I want them to be available; if, for example, they go down on one cluster, I want the application to come up on the other cluster. I need potentially regional failover. Disaster recovery and business continuity plans need to be taken into account." All of these things need to be considered. And then finally, obviously, the big challenge is security. If applications are not secure, if container environments are not secure, if you're not following best practices in terms of security, then you'll have a lot of problems getting to wide-scale adoption of containers within your environment.

06:50 — Burr Sutter
I've had one customer, specifically one new to containers, that had rolled out an entire Kafka broker in a single pod, and they thought that was going to be okay. And of course, when the pod went down, they couldn't understand why their entire backplane was offline. So that was definitely a misperception of what it means to live inside a container, specifically inside a Kubernetes pod wrapping that container image; they just didn't understand it. Have you seen some examples like that yourself?
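
A quick aside for readers following along: the difference Burr describes is, in Kubernetes terms, the difference between a bare, unmanaged pod and a managed workload. The sketch below is only illustrative; the names and images are placeholders, and in practice a Kafka cluster would usually be run through an operator such as Strimzi rather than hand-written manifests.

```yaml
# Anti-pattern: one unmanaged pod carrying the whole broker.
# If this pod or its node dies, nothing reschedules it and the messaging backplane is gone.
apiVersion: v1
kind: Pod
metadata:
  name: kafka-all-in-one                    # hypothetical name
spec:
  containers:
    - name: broker
      image: registry.example.com/kafka:latest   # placeholder image
---
# Sketch of a managed alternative: a StatefulSet gives each broker a stable
# identity and its own storage, and reschedules replicas when they fail.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-headless               # headless Service assumed to exist
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: broker
          image: registry.example.com/kafka:latest   # placeholder image
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```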

07:19 — Matt Quill
Yeah. And I've seen that one of the inhibitors to broader adoption of containers has been, first of all, sitting down and designing it out; second of all, bringing in your SecOps and NetOps teams, who are typically the people who are going to say, "No, we're not going to introduce risk into this environment until we map out how this entire thing works. We're not going to change our policies and introduce risk into our environment without doing that." So I find a lot of times with some customers, they're kicking the tires. They've got maybe a couple of applications running, but they're really stuck, or they've done exactly what you've described, which is they've taken an entire application or an entire workflow and put it in one pod, but they haven't considered all the different things.

(08:07): One of the challenges is that these decisions are being driven by application owners, and that's great, but a comprehensive architectural review needs to happen before you start to deploy. Otherwise you're not really going to realize the value of containers if, for example, you don't consider all the different elements of the infrastructure. Standing up a Kubernetes cluster, standing up an OpenShift cluster, is fairly easy. In fact, you can do it on ROSA or ARO in a matter of minutes. That's not the challenge. The challenge is now hardening it, scaling it out, and making sure that you follow best practices so that these applications are secure and available, whether they fail over across availability zones in the cloud, say, or you have a hybrid setup between the public cloud and your own on-site private cloud. All of these things need to be considered: availability, failover, security. So if the SecOps and NetOps teams, the security teams and the networking teams, are not at the table during the early design phases, that's where the friction comes from, and that's where some of the challenges come in when scaling out these environments.

09:11 — Burr Sutter
I'm totally with you there. I see that all the time, where people have basically just assumed this cloud thing, or maybe this Kubernetes or OpenShift thing, is like a magic wand: I'm just going to throw my applications over there and magic is going to happen. And those "-ilities" that you mentioned, as I like to refer to them, the capabilities and availability and resiliency, all the non-functional requirements, still have to be met, and that is so critical. So I love that point, and I think we do want to spend some time on it, because that is an area our listeners, and of course our various Red Hatters as well as the people over at F5, have to be thinking about on a regular basis.

(09:47): So let's try to delve down into some of those items. You mentioned customers that have regulatory issues. Let's say they're a big bank or a telco, or they basically work somewhere the government has said, "You must have some form of disaster recovery plan," that's a simple example. Some form of failover plan; you can't just go down and stay down, because we will fine you for that. Can you give me some more examples in that space where people have encountered a problem, failed, suffered, responded?

10:17 — Matt Quill
Yeah. So the first anecdote I would provide is a large-scale financial institution that's critical to our economy. BIG-IP was resident in the environment, and the challenge was really one of performance and scale, so that was part A in this OpenShift deployment. So we had BIG-IP resident in the environment at scale; given the complexity and scale and size, and the importance of this particular organization, that's important. And we implemented something called Container Ingress Services, which is basically an operator that is certified on the OpenShift platform. It allows BIG-IP to have visibility into the message bus, for lack of a better term, of the OpenShift environment, so that you can apply your load balancing policies and the containers will end up in the load balancing pool. You can have visibility into the applications, and you can scale and perform as needed. So BIG-IP can act as BIG-IP and provide the value that it provides in this OpenShift environment.
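
For context, F5 Container Ingress Services (CIS) watches the cluster API and turns Kubernetes or OpenShift resources into BIG-IP configuration, which is how pods end up in a BIG-IP load balancing pool automatically. The custom resource below is a rough sketch from memory of the CIS VirtualServer CRD; the hostname, address, Service name, and even the exact field names are assumptions that should be checked against the F5 CIS documentation for the version in use.

```yaml
# Hypothetical sketch: publish an in-cluster Service through BIG-IP via CIS.
apiVersion: cis.f5.com/v1            # CRD group used by recent CIS releases (verify for your version)
kind: VirtualServer
metadata:
  name: storefront-vs                # hypothetical name
  labels:
    f5cr: "true"                     # label CIS is commonly configured to watch
spec:
  host: storefront.example.com       # placeholder hostname
  virtualServerAddress: 10.0.0.50    # placeholder VIP that BIG-IP will answer on
  pools:
    - path: /
      service: storefront            # existing Kubernetes Service (assumed)
      servicePort: 8080
```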

11:19 — Burr Sutter
You mentioned earlier you can get the telemetry data now. So you have visibility into how the load is being distributed and whether things are being responsive. I assume you also have some form of liveness probe, a liveness check, in there as well?

11:31 — Matt Quill
Absolutely. So the nice thing about BIG-IP and NGINX is we have application-specific monitors that we've spent... F5 has been in business for 25 years. We know a lot about everything from commercial off-the-shelf products, so we've built specific monitors for Exchange, Outlook, SharePoint, things of that nature, SAP. But we can also write custom monitors and things like that. We have a network traffic control language that can provide advanced monitors and things like that. So we have quite a bit of knowledge of the applications themselves, and rather than just doing a ping and knowing that the individual device is up, you can actually understand not only whether the device or container is available, but whether the application itself is available as well.

12:16 — Burr Sutter
Right. And that's a huge thing. I can tell you from teaching basic Kubernetes to numerous developers, thousands and thousands of developers at this point, that the liveness probe and readiness probe are among those things they don't quite get right. They don't fully understand how those things operate within a Kubernetes world and why they're so important, so I can see having additional instrumentation on top of that being incredibly valuable.
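
For readers newer to the probes mentioned here, a minimal sketch of how they appear on a Deployment follows; the application name, endpoints, and timings are placeholders. The key distinction is that a failed liveness probe restarts the container, while a failed readiness probe only removes the pod from Service endpoints, so load balancers (including anything building pools from those endpoints) stop sending it traffic.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storefront                       # hypothetical app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: storefront
  template:
    metadata:
      labels:
        app: storefront
    spec:
      containers:
        - name: app
          image: registry.example.com/storefront:1.0   # placeholder image
          ports:
            - containerPort: 8080
          # Liveness: "is the process healthy?" A failure makes the kubelet restart the container.
          livenessProbe:
            httpGet:
              path: /healthz               # placeholder endpoint
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 10
          # Readiness: "can it take traffic?" A failure pulls the pod out of endpoints
          # without restarting it, so traffic is routed elsewhere.
          readinessProbe:
            httpGet:
              path: /ready                 # placeholder endpoint
              port: 8080
            periodSeconds: 5
```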

12:36 — Matt Quill
Absolutely. And in a perfect world, we would want to be... F5, combined with Red Hat, combined with some of the other players there, we would want to be there on day one of the conversation, when you're having that architectural discussion about your North Star: what are you trying to accomplish with your container deployment? How are you going to deal with all of these different questions about availability, security, failover, all those really critical issues? And let's architect a platform that makes sense, so that you can have the agility. You need to go fast, we understand that, but go fast in a way that also allows you to maintain your security and availability posture.

(13:15): Another key enabler of that is Ansible. On the BIG-IP side, we have spent a tremendous amount of effort from a product development perspective in building certified collections with Ansible, so that you can apply security policies using automation on the BIG-IP. You can configure BIG-IPs in a rapid manner, spinning up and scaling your resources, flexing, dealing with bursty situations in a rapid manner. Ansible is one of those key enabling tools in combination with OpenShift.
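
As a rough illustration of the kind of automation Matt describes, here is a minimal playbook sketch assuming the certified f5networks.f5_modules collection is installed; the BIG-IP address, pool name, member addresses, and credential handling are placeholders, and the module options are worth confirming against the collection documentation.

```yaml
# Playbook sketch: ensure a monitored pool exists on BIG-IP and add members to it.
- name: Configure BIG-IP pool for the storefront app     # hypothetical app name
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    provider:
      server: bigip.example.com                          # placeholder BIG-IP address
      user: admin
      password: "{{ bigip_password }}"                   # assumed to come from a vault
      validate_certs: true
  tasks:
    - name: Ensure the pool exists with an HTTP health monitor
      f5networks.f5_modules.bigip_pool:
        name: storefront_pool
        lb_method: round-robin
        monitors:
          - /Common/http
        provider: "{{ provider }}"

    - name: Add pool members
      f5networks.f5_modules.bigip_pool_member:
        pool: storefront_pool
        host: "{{ item }}"
        port: 8080
        provider: "{{ provider }}"
      loop:
        - 10.0.10.11                                     # placeholder member addresses
        - 10.0.10.12
```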

13:46 — Burr Sutter
Oh yeah. One of our favorite demonstrations of that kind of capability uses Advanced Cluster Management, which of course is our multicluster manager. And then we have the concept of a policy, an instrumentation, where it actually spins up those different OpenShift clusters. One of the things that happens in that case is the use of the Ansible Automation Platform to not only file a ServiceNow ticket to say what it did, but also make sure it updates F5, it updates BIG-IP, with the new entry so that they now have it within the global load balancer.

14:13 — Matt Quill
Right. And this is really where you can combine your security posture, which has to be very hardened, especially with these heavily regulated organizations, with doing things in a really fast manner. If you've got everything pre-canned and you've got integration with your IT ticketing system like ServiceNow, and you can rapidly respond to issues without manual intervention or with limited manual intervention, then you're really starting to adopt Agile, but you're doing it in a responsible way, which is what these organizations have to do. I mean, they have no choice. They have to be fast because obviously they need to be first to market, they need to get the infrastructure stood up because they're... In fact, internal IT organizations are competing with the cloud, but they have to do it in a responsible manner and maintain their security posture.

15:02 — Burr Sutter
Right. And I think that's probably the core of the story that we have to talk about, and that is this concept of applying policy, procedure, compliance, and regulatory rules, being able to do all those things but still in an agile way that allows the application developer to produce better code, new APIs, new capabilities, but at the same time ensure that they're meeting compliance rules, meeting the requirements for security rules, as an example. When you're out talking to customers, how would you advise them? Often, in many cases, dev is separate from security, which is separate from network, which is separate from ops and infrastructure. Often, those are four different organizational silos that I've seen. Sometimes there's one silo that's renamed itself DevOps, but they don't really have any of the other players in the room. There are no security people, there are no network people.

15:46 — Matt Quill
And in a perfect world, as you start this journey, having those players in the room to put together an architectural design and a plan is really critical, I think. More than anything, rather than the app devs sort of saying, "Hey, let's spin up an OpenShift cluster, kick the tires, and see what happens," you really need to be very deliberate in your planning, and how you architect it is really going to drive whether or not this is a successful deployment.

(16:13): I mean, that is crucial. Having those players, or representatives from each of those teams, weigh in and come up with a process of using automation in concert with containers, in concert with the F5 stack or some of the other players in the game; having those key players at the table during the design phase, during the architecture phase, is going to be really important to deploying this stuff at scale. I've witnessed it being done well, but more often than not I've witnessed the opposite: they deploy it and then they think about how to secure it or scale it and so forth after the fact. Nothing wrong with that, but just be mindful that you're going to need to consider all these different items if you're really going to get to critical mass in terms of your container deployment.

16:58 — Burr Sutter
And of course, what we just tapped on there is related to cultural change, having those people in the room and especially those people who'd normally like to say, "no."

17:06 — Matt Quill
Absolutely.

17:06 — Burr Sutter
"No, you can't do that." I think that's so important because I even had one person come to me at one point and said, "Okay, you showed us all these great things you can do in microservices and new application development architecture and of course, there's all this cool CICD stuff and all these Agile things and DevOps, but X, Y, Z says no." And I asked that person a simple question, I said, "When's the last time you went to lunch with that person?" And you could tell I shocked them. They were like, "What do you mean?" I'm like, "I bet they eat too because I think they're humans also. So go spend some time with those humans, build a relationship, figure out what their real needs are, and their real needs are going to be very practical. We can't ship software that includes a critical vulnerability. We can't ship software that can't scale. We can't ship software that doesn't have a disaster recovery plan," so I see that a lot with different customers. Having those conversations is so critical.

17:58 — Matt Quill
And I think when we had the traditional three-tier application environment, this made sense. You had your storage guys, you had your server guys, you had your app guys, you had your network guys, you had your security guys; that makes perfect sense. And then you had a ticketing system that would spin up the storage, spin up the server, and all that. Now, this has to happen fast. You have to push a button and all this stuff has to go. So this requires a substantial change in the way that people do business. Ultimately, it's going to make your organization much more resilient, much more efficient, and the time to value of new applications is going to be enormous.

(18:38): But if you don't start with a plan, and if you don't think about first principles, then I think you're going to have a problem scaling out your container environment, if you don't have the key stakeholders at the table and a plan to make sure this all works properly. Or you're going to have a less-than-resilient or less-than-agile environment where you're going to have to build everything, stop, log a ticket, wait for days, get them to deploy the BIG-IP or provision a VIP or whatever it is, and that's just not agile. You have to think about first principles in adopting containers and how you figure that out from a cultural change perspective.

19:18 — Burr Sutter
I'm glad you mentioned tickets because that is definitely one of those things that I poke at when I'm talking to different customers. It's like, "Okay, what is your ticketing system, and how much do you love it?" Because people are kind of married to it, and I don't understand why they feel they have to be married to it because it's a ticketing system.

19:33 — Matt Quill
Sure.

19:34 — Burr Sutter
And they're married to it from a workflow standpoint.

19:37 — Matt Quill
And chain of custody is really important. Understanding that you're following policies from a security perspective: very important. But there's a way of doing that using some of the automation tools that we touched on, like Ansible or some of the other players out there, but Ansible is the one we've gone deep with. So I would pitch Ansible to help make your environment more agile while continuing to keep your security posture hardened.

20:02 — Burr Sutter
Exactly. And you'll see in some of the presentations I've delivered that we actually have the ticketing system, but it's after the fact, meaning it's for the audit trail, the chain of custody like you mentioned, because the automation did all the real work, not humans, and therefore it filed a ticket to say, "I did this, I've made these updates, and now it's part of the system of record."
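
A sketch of that "ticket after the fact" pattern, assuming the servicenow.itsm collection is available; the instance details, field values, and caller are placeholders, and the exact options should be confirmed against that collection's documentation (a change request could be recorded instead of an incident, depending on the process).

```yaml
# After the automated change succeeds, record it in ServiceNow for the audit trail.
- name: Record the completed automation run for chain of custody
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: File a ServiceNow record describing what the automation did
      servicenow.itsm.incident:
        instance:
          host: https://example.service-now.com          # placeholder instance
          username: "{{ snow_user }}"
          password: "{{ snow_password }}"
        state: new
        caller: "{{ snow_user }}"                        # caller is typically required for new records
        short_description: "Automation updated BIG-IP pool members for storefront"
        description: >
          Ansible added the new OpenShift cluster endpoints to the BIG-IP
          load balancing pool and verified the health monitors. No manual
          intervention was required.
```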

20:24 — Matt Quill
Absolutely. And what's marvelous is... obviously, with individual organizations there's a certain inertia there. It's going to take a while for people to make this transition, but it's happening. And the tools have all been built, right? Between the work that F5 has done, some of the other players, Ansible, and OpenShift, it's all there; the cookbook is all there. It's just a question of ramping people up on how to change this and make sure that... You mentioned ServiceNow and integrating with ticketing systems. All of that stuff has been created. It just needs to be more broadly adopted, and people need to trust it.

20:57 — Burr Sutter
I love what you say there, the concept of trust and adoption is so critical. And I definitely see that in my own adventures, going out and talking to different customers, spending time with them, that is exactly what I always see. It's a matter of what I call the cultural change, the understanding, the learning, the gaining of skills. And of course, everybody within an organization has a different learning curve and a different learning pace. So some of those organizational silos might get there a little more slowly, others get there a little more quickly, and if we could all have patience with each other and give each other grace, I think we could all get there better, faster together.

21:27 — Matt Quill
Absolutely. And I think there's education required. I mean, there's going to be education required, and people are going to have to be held accountable. It's new, and obviously some people are less willing to embrace change, but I think the end result is going to be a quite elegant set of solutions that make sure IT organizations are much more efficient and deliver faster time to value, which is what the business demands more than anything.

21:54 — Burr Sutter
We certainly live in a digital economy now where those digital assets and APIs, those new apps if you will, have to get out the door ever faster, absolutely.

22:01 — Matt Quill
Correct.

22:02 — Burr Sutter
So if you had to summarize key learnings, key takeaways, just something to give our audience here, like, "Okay, remember these key core elements," what would you say in that category?

22:13 — Matt Quill
Sure. I mean, the key core elements are: consider the availability of your applications, and consider your first principles in adopting containers. You shouldn't adopt containers just because it's cool. I mean, you can go on AWS or something and kick the tires and so forth, but if your organization is adopting containers, you have to ask the question, why? And I think most organizations will be able to articulate that. And then secondly, architect properly. Look at your North Star, understand what your business goals are, and make sure that you're adhering to your policies in terms of security, application availability, failover, and business continuity, all of those things, because those practices don't go out the door simply because you've adopted containers. There's still data and there are still applications running on that platform, and people need access to those applications, depending on what business you're in.

23:04 — Burr Sutter
And I would say users expect those applications to be up and ready, always, and of course, secure.

23:10 — Matt Quill
Absolutely, and we all understand not only the financial risk of a hack or a security breach or something like that, but we also understand that the reputational damage could be even more extensive. So we need to factor both of those things in.

23:24 — Burr Sutter
I'm glad you brought up reputational damage. There's definitely a situation where you'd rather not have your CEO forwarding you angry tweets because somebody's data was leaked, or the system was down and offline, or something major had happened, because that does happen in our world at this point if you're the application owner.

23:42 — Matt Quill
Absolutely.

23:43 — Burr Sutter
Matt, thank you so much for our conversation today. I really enjoyed it. I learned a lot. I thought there were a lot of great things to touch on here. Like I said, we could have spent hours talking about many of these individual things, whether it be more aspects of security, more aspects of availability, more concepts of how to deal with architecture in a cloud native world, how to enable that agility; so many things we could have focused on here. But I thank you for your time. It was absolutely fantastic talking to you.

24:10 — Matt Quill
Great talking to you.

24:10 — Burr Sutter
You can read more about Red Hat's partnership with F5 at redhat.com/codecommentspodcast. Many thanks to Matt Quill for being our guest, and thanks to all of you for joining us today. This episode was produced by Brent Simoneaux and Caroline Creaghead, and our sound designer is Christian Prohom. Our audio team includes Leigh Day, Stephanie Wonderlick, Mike Esser, Johann Philippine, Kim Wong, Nick Burns, Aaron Williamson, Karen King, Jared Oats, Rachel Ertel, Devin Pope, Matias Faundez, Mike Compton, Ocean Matthews, Alex Traboulsi, and Victoria Lawton. I'm Burr Sutter, and this is Code Comments; an original podcast from Red Hat.

Red Hat | F5

What we’re doing together

Red Hat and F5 offer joint services, solutions, and platform integrations that streamline production and delivery of powerful and protected business applications. Together, these solutions reduce application bottlenecks, automate workflows, and support enhanced application availability and scalability.

Check it out

F5 Networks builds innovative container service with Red Hat

Read the case study

More like this

Deep learning in the enterprise

Machine learning models need inference engines and good datasets. OpenVINO and Anomalib are open toolkits from Intel that help enterprises set up both.

Rethinking networks in telecommunications

Successful telecommunications isn't just about the speed of a network. Tech Mahindra's Sandeep Sharma explains how companies can keep pace with unforeseen changes and customer expectations.

Aligning with open principles

It’s one thing to talk about your open source principles. It’s another to live them. David Duncan of AWS shares how extensive collaboration is key.