Learn about our DevOps implementation for app development within Salesforce
Rym Badri: Hello, and welcome to our webinar, Salesforce DevOps for the Digital Enterprise: Quality and Speed at Scale. My name is Rym Badri and I’ll be your host today. Today’s presenters are Manish Mathuria, co-founder and COO of Apexon, and Vijay Shyamasundar, solutions architect at Apexon. Manish is an experienced technology consultant and business leader with a passion for building winning teams. In his position as chief operations officer, he is responsible for global delivery, operations and innovation. He has served as a senior technology consultant across quality engineering, software development, cloud, SaaS, and mobility for Fortune 500 companies and Silicon Valley startups. Vijay is a seasoned solutions architect and DevOps consultant at Apexon. With over 15 years in enterprise application development and delivery, he has been a strong proponent of leveraging automation and DevOps to optimize the value stream. He assists enterprises with all aspects of DevOps, including maturity assessments, strategy and implementation, and regularly speaks on these topics at conferences and webinars. Before we begin, let me review some housekeeping items. First, this webinar is being recorded and will be distributed via email to you, allowing you to share it with your internal teams or watch it again later. Second, your line is currently muted. Third, please feel free to submit any questions during the call using the chat function or the Q and A function at the bottom of your screen. We will answer all the questions towards the end of the presentation. Finally, we have two poll questions set up during this webinar; feel free to participate and answer using the Q and A function. We will do our best to keep this webinar to the 45-minute time limit. At this time, I’d like to turn the presentation over to Manish.
Manish Mathuria: Thank you, Rym. And thanks everyone for attending this webinar. We hope to cover the critical aspects of why we believe DevOps for Salesforce implementations is becoming more and more prevalent and important, and how it has become one of the critical success factors for any Salesforce project. So what will we cover today? First and foremost, we want to start with why DevOps is important. What do we actually mean when we talk about Salesforce DevOps, right? We also want to add our own trends and observations, as well as industry reports and industry trends, about why this is important. We’ll then jump into the traditional Salesforce challenges when it comes to building code, building customizations, business rules and data, and putting that into production for Salesforce. And then we will specifically talk about how to modernize that entire process and how to build efficiency and automation into it. We’ll talk about that through a reference architecture, we’ll talk about the best practices that we have come to follow when it comes to Salesforce and DevOps, and finally, what is a good presentation without a demo? So we will show you an actual pipeline working around Salesforce where the entire build, code, test and deploy process is automated. All right. So let’s get started. Let’s talk about what Salesforce DevOps means, right? As we said earlier, you may be wondering: when it comes to deploying Salesforce, it is in the cloud, so what is the actual problem there and what are we really trying to solve? But it is not just about creating a build, testing it and deploying it to the cloud. For Salesforce, DevOps is more of a mindset, right? So when we actually look at the entire software development life cycle around Salesforce, the following points come to mind.
How have we made the process of managing versions, managing various development sandboxes, integrating the changes that multiple developers are making, testing them, and incorporating various quality gates into the entire build process, then automatically deploying to the target environment, which could be a staging environment or a production environment? And in addition to that, once you have deployed it, how do you actually track the deployment? How do you track things in production and ensure that the right amount of monitoring is in place, such that it creates an important feedback loop for making this process more optimized and more efficient? So the goal really is: how do we take the current software development life cycle, eliminate all the manual kinks in it, and automate it such that the manual dependencies are removed? That’s what we will be talking about today. Why do we believe DevOps has become more achievable and more important in recent times? There was a time when the act of writing code around Salesforce, testing it, and putting it into production or the target test environment was a very laborious process, in which developers had to sit around a particular desktop and try to figure out how to merge changes. Salesforce has taken this problem very seriously, and they have come up with several enablers. Some of these are mentioned here: the ability to create scratch orgs, the DX technology, the ability to develop modular packages within Salesforce, and the ability to integrate the entire development process with a traditional source code management system, such that you can check in your code, you can create pull requests around it, and you can do various static and dynamic checks on the code. Everything is possible, right? So we’ll get into the specifics of how this can be implemented.
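The "quality gates in the build process" idea above can be made concrete with a minimal Python sketch. The gate names and thresholds here are invented for illustration, not taken from the webinar or any real project; the point is simply that a build is promoted only if every gate passes, and a failure names the gate so the feedback loop knows what went wrong.

```python
# Hypothetical quality gates: each is a name plus a check over the
# build's metrics. Names and thresholds are illustrative only.
GATES = [
    ("unit-tests-pass",    lambda m: m["unit_failures"] == 0),
    ("coverage-threshold", lambda m: m["coverage"] >= 0.75),
    ("static-analysis",    lambda m: m["critical_findings"] == 0),
]

def evaluate_gates(metrics, gates=GATES):
    """Return (passed, first_failing_gate_or_None)."""
    for name, check in gates:
        if not check(metrics):
            return False, name   # fail fast, with a named reason
    return True, None
```

In a real pipeline the metrics would come from the test runner and static-analysis tool; the structure of the check stays the same regardless of tooling.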
Before we do that, it’s important to see for what kind of Salesforce implementation we believe DevOps is important, right? If we take various deployments and implementations of Salesforce, we can bucket them into three categories: small, medium and large. What we find is that medium and large implementations have these characteristics: a very significant amount of custom code, custom configuration, business rules, and integration with other third-party systems. Those are very prominent in a large or medium enterprise. So typically you would have any number of developers, testers and business people working on that implementation, and there is a heightened need to manage that process in a more-than-manual way, where things can be checked in, version controlled, configuration managed, and so on, right? The cost of failure is too high, and the cost of doing things manually gets beyond the capacity of what a human being can do. So we believe that in a medium and a large enterprise, it is very important to take automation, DevOps and continuous testing very, very seriously. Now, if we look at the industry, these results come from two sources: industry data and our own observations. We have had multiple instances where we have worked with our customers, medium to large customers, to help them implement Salesforce from a testing and DevOps perspective. And what we invariably find is that there is a significant ROI, right? The mean time to recovery is much, much faster when you have implemented and taken care of DevOps processes. There is less chance of failure. And overall, you will have a significant reduction in cycle time because of test automation and DevOps processes, right?
So if the goal is to follow an agile process and be able to release software to production, if not multiple times a week then at least weekly or biweekly, leaving these processes manual gets significantly in the way of achieving that goal in an efficient way, right? So with that background, what I would like to do is invite Vijay, who has had personal exposure as an architect working on several of our Salesforce DevOps initiatives, to talk about some of the best practices and a reference architecture around the software development life cycle and DevOps processes for Salesforce. Vijay.
Vijay Shyamasundar: Thanks, Manish. All right. So traditionally, Salesforce development has been a lot more sandbox-driven than source-driven, right? And we’ve seen a lot of challenges with that. At one of the credit union companies that we worked with, individual developers used their respective development sandboxes for developing, and then they used Gearset to sync their changes from the source to the destination, in which case they would be syncing to some sort of a QA sandbox. That was working fine initially, but when the team size grew and they also started working on some conflicting features, it started to become more of an issue, especially when something went wrong and they wanted to undo it. There would be a lot of time spent between developers and administrators to get things back into shape, because there was no single source of truth, right? So those merge conflicts and all that were a big challenge. Another company we worked with in the last couple of years is a medical device company that builds tech to beat cancer. And this is a very, very complex environment. There were a lot of third-party integrations, like SAP systems and ServiceNow, a custom backend, and a Dell Boomi sort of platform to orchestrate. So a lot of that environment management was being done manually, and being in the medical device industry, they have very high regulatory requirements, compliance requirements and audits. So it was more restrictive to follow some of those practices, right? There was a little bit of resistance in terms of adopting these things, but what they realized is that by implementing these modern ways of working, they could actually overcome those challenges.
For example, if feature development was done by a particular team, it would take them almost two weeks to get feedback, because you have to get on a release calendar and wait for the environment to become available. So there were a lot of delays in getting feedback, and the bottom line is, teams could have gone to production at least twice as fast if not for these bottlenecks. So I think that-
Manish Mathuria: Right. And to add to it, I remember in their particular case, all said and done, in addition to this 50% saving in cycle time, we were able to save at least a person and a half’s worth of cost from the development budget that they had-
Vijay Shyamasundar: Correct.
Manish Mathuria: … which was pretty significant. It brought in a savings of more than 15% to their overall project budget.
Vijay Shyamasundar: Correct. I mean, a lot of little things too, right? For example, a simple thing that you could have managed through a process and a tool would, from a collaboration perspective, require a lot of meetings across different cross-functional teams, et cetera. We’ll talk about some of that in the upcoming slides and also in the demo, but apart from just the environment management and source conflicts, there are also a lot of quality gates, approvals and collaboration points. Those are the little things that add up and really impact developer productivity, right? So these were some of the challenges that we’ve seen, and they are not unique to the two cases we talked about; especially around environments, collaboration and quality-gate approvals, it’s a common challenge across the various enterprises that we have worked with. So, as Manish said earlier, DevOps adoption and maturity has significantly increased over the last couple of years, thanks to the first-party tools from Salesforce themselves, and also some of the tools in the community. So this is, from our perspective, what a reference pipeline would look like for a modern Salesforce implementation, right? We have intentionally kept this tool-agnostic, because there are various solutions that you could use to implement it, but at the end of the day the pipeline philosophy remains the same and holds good irrespective of whatever commercial or open-source tools you choose to use. So the source of truth here would be our version control system; it could be GitHub, Bitbucket, et cetera, and teams could leverage several SCM workflows. For the purposes of this discussion we’ll keep it simple: there’s a master branch which reflects what is in production, and your release branch would be the up-and-coming release that your teams are working on.
It’s all parallel development these days, with multiple developers contributing; they create their own feature branches and work on their features there. So we’ll talk about how that propagates all the way from the developer’s machine to production in this parallel-development scenario. Basically, starting from the feature branch, developers would be working locally, and thanks to scratch orgs, a lot of those changes can be tested in an isolated manner. So you could add your feature code and the tests, and run all of those things in your scratch org. Once that looks good to a particular developer, they would raise a pull request, and then somebody at a lead or an architect level would review and merge that code, right? From then on, it gets deployed to the Dev sandbox, again from the SCM. And that’s a good place for you to run your static analysis, run unit tests for all of the features as well as the newly added ones, and any component tests if you have them. It is also a very good place for leveraging shift-left. What I mean by that is, your automation, testing or QA teams would have built build-verification or smoke tests, which they would run first when you put something in a QA environment, to accept the build or reject it, so you get that early feedback before they even spend their time on it. So once all of that is done in a Dev sandbox, it’s ready to be deployed to higher sandboxes, in this case something like a QA sandbox. And this is usually a somewhat controlled environment in most cases, because it would have the various other integrations that you typically need in an enterprise, and it’s usually an administered environment.
So some sort of quality gate or approval is required for you to deploy, but that can also be part of your pipeline, with visibility into whose approval it’s waiting on. Once that is done, you can deploy again from your source code into this environment, where you would be running your integration tests, end-to-end functional tests and all that. Once that is completed, typically in a lot of cases you would need some sort of performance and security checks done before you can actually deploy to production. Also, testing your features on production-like data uncovers a lot of issues before you go to prod, and it’s definitely a good practice. That is typically handled in your staging or pre-production sort of environment. And once all that is done, a lot of times you may not necessarily deploy directly to prod; in some cases you might, or it might be admin folks who do it, but either way, the reference pipeline gives you the ability to deploy if you want to. And that’s how the change propagates all the way from the developer’s machine to production, with different tests at different stages, and quality gates and approvals along the way. All along this process, there would be notifications targeted based on the stage; they would go to certain people and could alert on Slack or HipChat or email, et cetera. You’d also have some sort of dashboard as part of this reference implementation, which would be persona-specific. So the Dev and QA teams would have whatever information radiator is relevant for them; similarly, operations folks would have a different sort of metrics, and executives would have different metrics, right? So this is pretty much what a modern Salesforce DevOps pipeline looks like.
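The promotion flow just described, where a change moves through progressively higher environments and controlled stages wait for an explicit approval, can be sketched in a few lines of Python. The stage names and which stages are controlled are illustrative assumptions, not a prescription from the webinar.

```python
# Hypothetical promotion model: a change climbs the environment ladder,
# and "controlled" stages block until an approval has been recorded.
STAGES = ["dev-sandbox", "qa", "staging", "production"]
CONTROLLED = {"qa", "production"}   # stages needing an explicit approval

def promote(change, approvals):
    """Return the list of stages the change actually reaches."""
    reached = []
    for stage in STAGES:
        if stage in CONTROLLED and stage not in approvals:
            break                    # pipeline pauses here, visibly waiting
        reached.append(stage)
    return reached
```

The useful property is that "where is my change and whose approval is it waiting on" falls directly out of the model, which is the visibility the reference pipeline aims for.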
And we’ll definitely talk more about this, but I think it’s a good time, Rym, for us to bring up our first poll.
Manish Mathuria: So, Vijay, before we go there, I know you’ve been asked this question before.
Vijay Shyamasundar: Yeah.
Manish Mathuria: You said this could be done with open source products as well as commercial products. What is your approach to determining whether to use open source or commercial products?
Vijay Shyamasundar: Right. I think it depends on a lot of factors, right? Some of the non-Salesforce-specific tools, for example Jenkins or any other CI tool which is not necessarily Salesforce-oriented, will probably need some skill to get all of this built, but for people who have been using some of the Salesforce-specific automation tools, those might be more familiar, and they do bring some value adds in terms of managing this whole thing. So there are pros and cons to each approach. We can deep dive, look at the specific needs, and make a recommendation as to what might be the best fit. All right. Hopefully you are looking at the poll questions on your screen and-
Manish Mathuria: Yep. Let’s move on.
Vijay Shyamasundar: All right. Okay. When I showed that reference pipeline, you must have been thinking that this all looks good, but it’s easier said than done, and we couldn’t agree more, right? There are a lot of complexities and challenges involved. So what we will do in the next few slides is break down this reference pipeline into different sections, and for each section we’ll talk about what some of the lessons learned have been for us and what some of the best practices are that we follow to overcome those. We’ll first talk about the section here which is around enabling parallel development. I’m trying to move to the next screen, bear with me. Yeah. So in this case, as the team size grows, process and tool governance becomes a challenge. What I mean by that is, for example, at one of the travel and medical risk companies that we work with, from a process perspective, some of the senior developers who were very good and familiar with the domain and the application would pretty much merge directly to master without going through much of the process, while some of the junior developers would raise a PR and someone senior or a lead would take a look and then do the merge. So it was not a standard process. And from a tool perspective, some of the developers would run the static analysis on their local environments and against their own configurations. It was not centrally managed and controlled, which is definitely not a great practice, because you are not leveraging the benefit of the tool completely, you’re not getting the visibility, and comparing reports from two developers is not comparing apples to apples, right?
Those are some of the examples, and there were several other cases where different versions of the tool were being used by different developers, and things like that, which leads to those challenges; especially when you work with large teams of 1,500 or more developers, managing this itself becomes a challenge, right? So at the medical device company that we talked about, the very regulated one I highlighted, what we did from a process perspective is create several standard templates. They used Jenkins there, and we developed a lot of standard templates for the pipeline jobs, which had built-in quality gates, approvals and thresholds.
Vijay Shyamasundar: For example, in a feature branch pipeline, if the unit test coverage dropped below the set threshold, they wouldn’t even be able to raise a pull request. So basically, that’s how we controlled it from the process perspective. Again, from a tooling perspective, we created a tool-in-a-box sort of platform, leveraging containerization, so that they only get the specific versions and configs that you want them to use, and that’s controlled by our center of excellence team. The biggest benefit of this, I would say from a governance perspective, is that you are not at the mercy of individual team members following instructions and guidelines; you are actually enforcing them, and ensuring your standard operating procedures are not violated, by controlling that through a tool and a process. And it has definitely, significantly increased developer productivity. The platform I talked about, the tool-in-a-box sort of concept, also really helps to onboard new team members much faster, rather than them spending, in some cases, days to set up their environment. So those are some of the benefits of solving the challenges in the way we just spoke about. Let me move on to the next section now, which is around scratch org management. That also comes with its own challenges: you may already be familiar with the limitations on the number of scratch orgs you can use, based on the type of licenses you have with Salesforce, right? So creating and shaping those orgs, having the necessary data in them, et cetera, is definitely a time-consuming task, and so is some of the housekeeping, like deleting inactive scratch orgs so others can use the quota. There is an admin sort of overhead which eats into development velocity, right? So that is something we have automated; Salesforce DX allows you to do that, in combination with Jenkins.
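The tool-in-a-box governance idea can be illustrated with a small sketch: a center-of-excellence-pinned manifest that developer environments are validated against, so compliance is enforced mechanically instead of by instruction. The tool names and version numbers below are invented for illustration.

```python
# Hypothetical "tool-in-a-box" drift check. The CoE pins exact tool
# versions; a developer environment is flagged if anything differs.
PINNED = {"sfdx-cli": "7.209.6", "pmd": "6.55.0", "node": "18.16.0"}  # invented versions

def validate_env(installed):
    """Return a list of violations (empty means compliant)."""
    problems = []
    for tool, version in PINNED.items():
        have = installed.get(tool)
        if have is None:
            problems.append(f"missing {tool}")
        elif have != version:
            problems.append(f"{tool}: have {have}, need {version}")
    return problems
```

In the containerized setup described above, the same idea is enforced by construction: the image simply ships only the pinned versions, so a check like this becomes a CI safety net rather than the primary control.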
We’ve been able to fully automate the scratch org creation, shaping and cleanup process, and also hydrating your org with the necessary data. A lot of times, your fix could be very small; a developer spends only a few minutes to fix it, but actually testing it is where you’ll probably spend hours, because you would have to create a lot of custom objects, like parent and child objects, which is time consuming. That’s when you get tempted to use another sandbox or deploy the change directly, which breaks your process and leads to a lot of other problems. So to avoid those sorts of things, it’s a good practice to automate your scratch org creation, management and cleanup, and also have some ability to hydrate your orgs with the right test data, so your developers are focusing on writing code rather than spending time creating that data. We’ll talk a little more about this when we do the demo, and more about test data management in one of the later slides, but this has definitely helped; in combination with the previous section, it helps with developer productivity, speed to market and efficient scratch org management. Going on to the next section: we talk a lot about continuous testing in different environments, about doing a lot of functional tests and non-functional tests, and all of this is pretty complex and requires a lot of things. It requires you to have a stable environment to begin with; it requires the right types of data for the different types of tests; and from an execution perspective, you need the execution environments and the scripts. You actually have to write those scripts, ensure they’re maintainable, and, as your application changes, keep them continually up to date.
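Circling back to the scratch-org housekeeping described above, here is a minimal sketch of the lifecycle bookkeeping: active orgs are capped by license limits, so before granting a new one, the least-recently-used org is evicted. The limit and the org records are illustrative; real caps depend on your Salesforce edition, and the actual create/delete calls would go through the Salesforce DX CLI.

```python
# Hypothetical scratch-org pool: records are (name, last_used_day).
MAX_ACTIVE = 3   # illustrative cap; real limits come from your Dev Hub licenses

def request_org(pool, name, today):
    """Return the updated pool after granting `name` a scratch org."""
    pool = sorted(pool, key=lambda org: org[1])   # oldest first
    while len(pool) >= MAX_ACTIVE:
        pool.pop(0)          # evict least-recently-used org (sfdx delete in reality)
    pool.append((name, today))
    return pool
```

Automating exactly this kind of bookkeeping, plus shaping and data hydration, is what removes the temptation to fall back to a shared sandbox.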
There is also the overall test management perspective, all the way from requirement-to-test traceability: what is the right set of tests I should be running? In some cases, with the large financial-bank sort of customers we have worked with, it’s not even an option to execute all of the tests. If you want to regress something, there are like a quarter-million test cases, right? So you have to bring in some intelligence, some sort of scientific model for a risk-based approach, to say: these are the tests that I want to run in a meaningful way to get that fast feedback. So how do we address all of those areas? That’s definitely one of the big beasts, right? For example, take the topic of test data, which is definitely a significant problem; I’m pretty sure most of you will agree, right? One of the healthcare customers we work with makes blood glucose meters, and they have companion applications and web portals which work in coordination with the glucose meter, and they cater to about 26 different locales. They have automation to test in all of the 26 languages, but in reality, they could only test in two languages prior to a release. The reason is that it takes way too much time for them to create test data in all of those different locales. So even though there was automation, you couldn’t leverage the full benefits because you didn’t have an efficient way of managing the test data. And that really became a problem when an issue happened in production with a type of data and a locale we hadn’t tested. It became a very big issue, and we have solved that now by leveraging TDM (test data management).
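The risk-based selection idea mentioned above, picking a meaningful subset out of a quarter-million tests, can be sketched as a simple scoring model. The weighting and the test attributes here are hypothetical; real models often also factor in recency of failures, test cost, and code-coverage mapping.

```python
# Hypothetical risk-based test selection: score each test by its recent
# failure rate and by whether it touches a changed component, then run
# only the top-N within the time budget.
def select_tests(tests, changed_components, budget):
    def risk(t):
        relevance = 1.0 if t["component"] in changed_components else 0.0
        return 0.7 * t["fail_rate"] + 0.3 * relevance   # illustrative weights
    ranked = sorted(tests, key=risk, reverse=True)
    return [t["name"] for t in ranked[:budget]]
```

The payoff is fast feedback: instead of an overnight full regression, the pipeline runs the highest-risk slice on every change and reserves the full pack for less frequent runs.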
So now, if an issue occurs in production, we have the ability to create very similar data in a lower environment and reproduce it there. That is one aspect. The execution perspective is definitely a lot more complex. Like I said earlier, when you have large test packs of thousands of tests, there is no point in running your CI regression pipelines if it’s going to take overnight or a day to get the feedback; it defeats the purpose. So how do you set up parallel testing, orchestrated with your CI tool? You’ll probably need some sort of worker nodes, different machine configurations from a specs perspective, and different tooling to run different types of tests. These are all very time-consuming things to set up, right? And also, for a web-based application, for example, launching your browsers to test on different browser combinations: these are some of the complexities that are easier said than done. So what we have done for that medical device company I spoke about earlier, from an execution perspective, is create certain container templates, images which get instantiated through our pipeline whenever you’re running a particular set of tests, so they don’t have to spend time creating those from scratch. And most of the time, once that initial work is done, you’re not changing your automation frameworks every day, so it’s not a significant maintenance effort, but it really helps speed up your execution. We also created a Kubernetes cluster that will spawn your browser pods; in their case, it was Chrome and Firefox.
So we have the ability to do end-user type testing in those browsers with the Selenium Grid, and we can do a lot of it in parallel. Once you’re done, you tear down that environment; it’s only spun up when you need it, which also helps with cost savings. From a test management perspective, there are a lot of tools out there which integrate with your ALM tools like Jira; that’s where we achieve the traceability, the execution runs, et cetera. Typically it’s integrated with your automation framework, so you have the ability to trigger all of that and get the 360-degree view. From a framework perspective, there are several automation frameworks, but in that case we used one of our homegrown frameworks called the Quality Automation Framework, built on top of Selenium and Appium. It helped us create the test cases in such a way that they’re easy to maintain and execute. It comes with a lot of out-of-the-box integrations with most of the tools in the Salesforce ecosystem, and we have some accelerators for Lightning and Classic sort of components, so it helps with the creation of the scripts as well. Continuous testing is definitely a whole topic in itself, but the idea is pretty much: how do I bake my automated tests into the pipeline so I get faster feedback, and have an assessment of the business risk for the software that’s ready to go out? That is pretty much the whole idea of CT. And like we talked about earlier, there are several ways to do it, but these principles hold irrespective of whatever tooling or platform you choose. So that is pretty much the section we wanted to talk about, and there are several other implementation best practices, like integrating with Slack or HipChat or email, as I mentioned earlier.
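The parallel cross-browser execution just described can be sketched structurally as fanning one suite out over a browser matrix. In the real setup each run would drive a Remote WebDriver against Chrome or Firefox pods on a Selenium Grid in Kubernetes; here the run itself is simulated so only the fan-out structure is shown, and the names are illustrative.

```python
# Hypothetical cross-browser fan-out. Real code would replace run_suite
# with a Remote WebDriver session pointed at the Selenium Grid.
from concurrent.futures import ThreadPoolExecutor

BROWSERS = ["chrome", "firefox"]

def run_suite(suite, browser):
    # Placeholder for an actual grid-backed test run.
    return (suite, browser, "passed")

def run_matrix(suites):
    jobs = [(s, b) for s in suites for b in BROWSERS]
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(lambda job: run_suite(*job), jobs))
```

Because the grid pods are spun up on demand and torn down afterwards, the same fan-out scales to many parallel sessions without paying for idle infrastructure.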
This is significant; for example, in several places, the people who have to approve won’t even know something is waiting. It’s buried somewhere in email, or somebody has to call them and let them know. But in a lot of cases, with a simple tool, they have the ability to just approve from Slack. They get a notification wherever they are, they click a button, and that is integrated with your pipeline; it’s approved and it moves forward. Those little things help significantly: instead of waiting for that person to be in the office, the approval may have happened at night, and things are already completed before the teams come in. And with those notifications, we need to ensure they don’t spam; they have to be targeted for specific personas, so that based on the section of your pipeline, different notifications get triggered to the respective people. And quality gates and approvals, like we talked about, can typically be built visually into the pipeline itself, a one-stop view for everybody who wants to see where a particular feature or application is. It shows where things are, and the persona-specific dashboards definitely help: if we can’t measure what we’re doing, it’s really hard to improve. So I think that comes in really handy. I also like the culture aspect that Manish was talking about. We’ve seen significant improvement with these dashboards in a lot of places I’ve been part of; they’re displayed on large monitors and are very transparent to the team. So that brings a culture of not leaving something failing in a particular branch for a long time. And there are definitely a lot of gamification aspects around it as well to help with that culture. And I think this is what the holistic solution would look like.
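The targeted-notification-plus-remote-approval pattern above can be sketched as a small gate object: each gate pings only its responsible persona, and an approval from any channel (a Slack button, an email link) unblocks it. Stage and persona names are invented for illustration.

```python
# Hypothetical approval gate with persona-targeted notification.
class ApprovalGate:
    def __init__(self, stage, approver):
        self.stage = stage
        self.approver = approver          # the one persona who gets pinged
        self.approved_by = None

    def notification(self):
        # Targeted: only the responsible persona is notified, no spam.
        return f"@{self.approver}: {self.stage} is waiting for your approval"

    def approve(self, who, channel):
        # Channel is irrelevant: Slack, email, dashboard all unblock it.
        if who == self.approver:
            self.approved_by = (who, channel)
        return self.approved_by is not None
```

The design choice is that authorization lives with the persona, not the channel, which is what makes the "approve from your phone at night" scenario safe to support.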
We’ve talked about various different pieces, but if you look at the top of the reference pipeline, you can see how to bring in more efficiencies with the different things we talked about: a tool-in-a-box sort of platform for developers, environment management leveraging DX and an automation tool, and continuous-testing enablement, creation and maintenance to get the 360-degree view from a quality perspective. I think that’s pretty much what I wanted to cover on the reference pipeline. We spoke a lot about that medical device company at various stages. We started with them a few years ago; they were following a much more traditional sort of model, and we have now been able to get to mostly 100% automation from code to deploy. Like I said, that’s a very controlled environment; we don’t deploy to production, but we do have the ability if we want to. For a lot of the lower environments, it is fully automated. And from a test automation perspective, we’ve been able to achieve 100% automation for API, and for web and mobile it’s about 90%. The biggest benefit, I think, is the cycle time, which has been slashed to half of what it used to be. Manish already talked about the resource savings, and another big benefit is that we could control a lot of the defect leakage to production, or even to higher environments. That is a quick summary of that particular case study. I think before we jump into the demo, Rym, if you want to bring up the second and final poll that we have for the webinar, we’ll give it a couple of minutes and then jump into the demo.
Rym Badri: Yeah. So at this time you should be able to answer the poll questions at the bottom of your screen. We have three questions for this one. Number one, how frequently do you deploy to production: daily, every two weeks, every four to six weeks, or quarterly? Second, how many environments (sandboxes) do you use in your SDLC: none, one to two, two to four, or more than four? And what portion of your testing is automated: none, some, or most? We'll allow a few minutes to get your answers for this poll. I see we have about a 50% completion rate; we'll allow a few more seconds. A few more answers coming in. Okay, we have almost 60% completion. The results should appear on your screen in just one second. All right, Vijay, feel free to continue.
Vijay Shyamasundar: Perfect. Thank you everyone for taking the time to answer our polls. All right, I think we'll jump into the demo section now. I just want to call out a disclaimer here: this demo would take a lot more time if I showed it completely live, so prior to this webinar I ran it live, recorded it, and pulled out certain portions, and that's what we're going to look at over the next few minutes. All right, so let me show you. This is a sample application we built primarily for demo purposes, to showcase how a change propagates all the way from your development environment to production; the application itself is not of much interest here. This is an existing feature in production: like I said, salespersons can come in and move some of their leads to opportunities, et cetera. Let's say, for the purposes of this demo, we've been asked to add a widget under this, maybe a slider of some sort, to show the probability of that opportunity converting. Typically, a developer would create a feature branch from master or from the release branch, then add the code and the corresponding tests for the feature he's developing. Once he has done all of that in his IDE, he can trigger this pipeline to test the change in a scratch org. Ideally, webhooks would be set up so this feature-branch pipeline triggers automatically as soon as a commit lands on the feature branch, but for the purposes of this demo, I'm triggering it manually.
As you can see, the pipeline has checked out the source code and authorized to the Dev Hub. The developer doesn't have to worry about creating the scratch org, deploying to it, or even deleting it afterwards from a housekeeping perspective; all of this is done through a combination of DX and, in this case, Jenkins. This pretty much frees up your developer to focus on writing code rather than anything else, which is the best productivity you can get out of your developers: the repetitive, mundane work is taken care of by these automated pipeline jobs. Now, once the tests run successfully, and like I said, you would have some quality gates in there as well, the developer goes into the SCM, in this case Git, and creates a new pull request. A lead reviews it, and if all looks good, merges it to the master branch, and that merge is what triggers the master-branch pipeline, which we should be getting to in a minute. Oops, sorry, I apologize for that. All right. So again, any commit to the master branch triggers this pipeline, and it does the same things: checks out your source code, authorizes to the orgs, runs the centralized static analysis we talked about against that centrally controlled instance, converts your source to metadata, and then authorizes to the particular sandbox you're deploying to, which in this case could be a dev sandbox. Once that is done, the deployment happens, and you bake your automated tests in there. In a minute you should see the browser come up; automation runs a little fast for the human eye.
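The feature-branch flow described here can be sketched as a declarative Jenkinsfile. This is an illustrative reconstruction, not the pipeline from the demo: the credential variables, org alias, and file paths are assumptions, and the `sfdx` commands are from the first-party Salesforce CLI of that era.

```groovy
// Illustrative sketch of a feature-branch pipeline like the one demoed.
// Environment variables, alias names, and paths are assumptions.
pipeline {
    agent any
    stages {
        stage('Authorize Dev Hub') {
            steps {
                sh 'sfdx force:auth:jwt:grant --clientid $SF_CLIENT_ID ' +
                   '--jwtkeyfile server.key --username $DEVHUB_USER ' +
                   '--setdefaultdevhubusername'
            }
        }
        stage('Create scratch org') {
            steps {
                sh 'sfdx force:org:create -f config/project-scratch-def.json -a ci-scratch -s'
            }
        }
        stage('Push source and run Apex tests') {
            steps {
                sh 'sfdx force:source:push -u ci-scratch'
                sh 'sfdx force:apex:test:run -u ci-scratch -r human --wait 10'
            }
        }
    }
    post {
        // Housekeeping: the developer never has to delete the scratch org.
        always { sh 'sfdx force:org:delete -u ci-scratch -p' }
    }
}
```

The `post { always { … } }` block is what gives the "developer never worries about cleanup" property: the scratch org is deleted whether the tests pass or fail.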
You may not catch the entire thing, but it's testing the widget I showed earlier, where you move leads to opportunities, et cetera. It also does some assertions and has some positive, negative, and boundary sorts of tests. Once the tests pass, you have the ability to deploy to production, but first comes the quality gate I talked about, which you'll see on the screen. It's built into the pipeline itself, so, like I was mentioning earlier, if somebody wants to know whose approval it's waiting on, or who rejected it, it's all part of this single pipeline view. Once it's approved, it goes ahead and deploys to production, and you may have some read-only or monitoring sorts of tests you want to run in production as well. You can make those part of the pipeline so you get feedback, and some of the continual monitoring may remain and run at scheduled intervals, however you've set it up. We've also incorporated Slack for notifications here, so at every stage you get a notification in either email or Slack. So that's pretty much a quick demo showcasing how you'd propagate a change all the way from the developer's local machine, how it flows through the various branches in version control, how that relates to deployments to the different environments, and how you get to production with the necessary quality gates, approvals, and notifications in place. That's what we had from a demo perspective, and I think this is the last slide I have, covering where Apexon has been able to help several enterprises.
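A quality gate of the kind described boils down to a simple promotion decision. Here is a minimal sketch, assuming the gate checks Apex test results before allowing promotion; the 75% floor mirrors Salesforce's documented minimum code coverage for production deployments, while the function and parameter names are invented for this illustration.

```python
# Minimal sketch of a pipeline quality gate, assuming the gate checks
# automated test results before the build can be promoted. The 75% floor
# matches Salesforce's minimum Apex coverage for production deployments;
# everything else here is an illustrative assumption.

def quality_gate(coverage_pct: float, failed_tests: int,
                 min_coverage: float = 75.0) -> bool:
    """Return True when the build may be promoted to the next stage."""
    return failed_tests == 0 and coverage_pct >= min_coverage

assert quality_gate(82.5, 0) is True
assert quality_gate(82.5, 1) is False   # any failing test blocks promotion
assert quality_gate(60.0, 0) is False   # below the coverage floor
```

In the pipeline view from the demo, a gate like this runs automatically, while the human approval step sits on top of it, so a build only reaches the approver if the measurable criteria already pass.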
We've done a lot of strategy and assessment work from a DevOps and continuous testing enablement perspective, and a lot of enterprises have also leveraged us to clear their backlogs, especially when they're on the DevOps journey and want somebody to come in, clean up a lot of their manual tests, and convert them to automated ones. A lot of times we're asked to come in and do framework and tooling comparisons or recommendations. We've built a lot of functional, performance, and non-functional tests, mostly all automated, and we have a lot of pre-built pipeline-as-code assets customized specifically for Salesforce CI/CD: standard templates, shared libraries, and such, along with an automation framework. And we have the right set of people who have been there and done that in the Salesforce ecosystem when it comes to DevOps, continuous testing, and automation. I will definitely take questions now, and if you have any further questions afterwards, feel free to reach out to us; we'll answer them and connect with you on next steps. So again, I want to thank everyone for taking the time to come to our webinar today. I think now we'll-
Rym Badri: At this time, as Vijay just mentioned, we would like to get to your questions. As a reminder, at any time, feel free to ask a question using the chat or Q and A function at the bottom of your screen. There have been a few questions during the webinar, and I see one here for Manish already: data is a real challenge for us. You talked about TDM; is it a product? Can we plug it into our ecosystem? Manish, what do you think?
Manish Mathuria: Thanks, Rym. Test data is indeed a real problem for pretty much every effort we've been involved in. The reason it's a problem is that customers end up spending a significant amount of time keeping the test data relevant to the system they are testing. As we know, the system under test continuously changes, and if you have to keep up with it manually, that becomes a problem. So in short, test data management is a module that Apexon has developed, and fundamentally what it does is keep test data current with the current state of the system under test. Essentially, through a little bit of code, but fundamentally a lot of pre-created hooks and queries we've developed for typical Salesforce modules, it is able to pull the right test data for a specific test case. Of course, this is a framework module, so it doesn't work out of the box, but once we configure it with the right queries, then going forward it automatically pulls the right test data from the system under test, so you don't have to manually maintain data for these automated tests.
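The core idea, pre-configured queries per test case that pull fresh data from the system under test at run time, can be sketched in a few lines. This is a hypothetical illustration of the concept, not Apexon's TDM module: the registry, the SOQL query, and the `run_query` hook are all invented for this sketch.

```python
# Hypothetical sketch of the TDM idea: each test case is mapped to a
# pre-configured query, executed against the live system under test at
# run time, so tests never depend on stale, hand-entered records.
# The registry, query text, and fetch hook are illustrative assumptions.

TDM_QUERIES = {
    "convert_lead_to_opportunity":
        "SELECT Id, Name FROM Lead WHERE Status = 'Open' LIMIT 1",
}

def fetch_test_data(test_case: str, run_query) -> dict:
    """Resolve the configured query for a test case and execute it via
    the supplied run_query hook (e.g. an authenticated org client)."""
    query = TDM_QUERIES.get(test_case)
    if query is None:
        raise KeyError(f"No TDM query configured for {test_case!r}")
    return run_query(query)

# A stub stands in for a live org query in this sketch.
stub = lambda q: {"Id": "00Q000000000001", "Name": "Acme Lead"}
record = fetch_test_data("convert_lead_to_opportunity", stub)
print(record["Name"])
```

The framework nature Manish mentions shows up in the registry: it ships empty and is filled in per customer, after which the tests pull current data automatically.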
Rym Badri: Thank you.
Manish Mathuria: That's how I … Yeah.
Rym Badri: Thank you. Now, there seems to be a second question for Vijay this time. We have a lot of existing code and customization. How can we convert them to DX projects, especially in terms of modularization?
Vijay Shyamasundar: Sure. I think that's a very hot topic, and a lot of people are on that journey today. Salesforce itself offers first-party tools; through DX you have the ability to convert, but it's not quite that straightforward, especially when it comes to modularization. I think of it like a monolith-to-microservices migration in the custom application world, leveraging something like the strangler pattern: you take a particular chunk of functionality, externalize that entire module, and keep repeating. I think that's the best way forward, especially when you're in a really complex environment with a large code base; that's the low-risk and, I'd say, right option that most enterprises follow.
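For the mechanical part of the conversion Vijay mentions, the first-party CLI of that era provides a one-shot transform from Metadata API format to DX source format. These commands are illustrative only (they require an authenticated org, and the directory names and org alias are assumptions); the harder, incremental work of splitting the result into modules is what the strangler-style approach addresses.

```shell
# Illustrative only: pull existing metadata and convert it to DX source
# format. Directory names and the "MyOrg" alias are assumptions.
sfdx force:mdapi:retrieve -r ./mdapi -u MyOrg -k package.xml
unzip ./mdapi/unpackaged.zip -d ./mdapi_src
sfdx force:mdapi:convert -r ./mdapi_src -d ./force-app
```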
Rym Badri: Great. Thank you. There seems to be one more question here for Manish. You talked about your continuous testing enablement solution. What framework do you use for that?
Manish Mathuria: So we have our own framework, but the answer largely depends on the underlying test tools being used. We use both open-source and commercial tools for test automation, so the answer to the framework part depends on the tool. There is, for sure, a framework to go with every tool choice we've come across, and that framework comes pre-built in a way, so you're not reinventing the wheel while trying to solve the problem.
Rym Badri: Perfect. Thank you. And there seems to be another question for Vijay, related to the demo this time. Were there any commercial tools used in your demo today? We use Gearset. Is the reference solution shown still applicable?
Vijay Shyamasundar: Yeah. In the demo we pretty much used all open-source tools; we didn't use any commercial tools. But like I said earlier, Manish and I had a quick chat on that topic as well: there are several commercial options that would allow you to follow the same reference pipeline. In today's demo, though, we used all open source.
Rym Badri: Sounds good. Thank you. Given the constraints of time, that's all the questions we're able to answer today. Also, be sure to check out and subscribe to DTV, our digital transformation channel that brings in industry experts for thought leadership videos. Again, many thanks to our speakers, Manish and Vijay, and thank you everyone for joining this webinar today. You will receive an email within the next 24 hours with the slide presentation and a link to the webcast replay. If you have any questions we weren't able to cover today, please contact us at email@example.com, or you can reach out to our speakers directly; you can see their emails on the screen. Thank you all, and enjoy the rest of your day.