6 ways to reduce the risk of deployment without adding cost
Anyone can push code to production more often; the trick is to do it without hurting users or driving up costs.
The drumbeat of the age is more software, faster, and that includes more frequent deployment. The classic software literature says that every release should be fully tested, which will either increase the cost of testing or reduce test coverage.
Here are other ways to reduce the risk of deploying.
When the programmers at Etsy and IMVU created continuous deployment, they learned to break the application into small pieces. Etsy, in particular, used PHP. As long as a programmer did not modify the shared code library or the database, they could deploy one web page and only that web page. Arguably, this eliminated the need to retest the whole system end to end.
Today's systems are more likely to be made up of microservices combined with a static front end. Set programmers free to deploy their code frequently by making the subsystems independently deployable. At the same time, expect the programmers to support that code in production.
To support their own code, teams seem to need advanced monitoring.
It was Ed Keyes who, back in 2007, first said that "sufficiently advanced monitoring is indistinguishable from testing." Notably, Ed made that claim a year before anyone had even coined the term "DevOps."
The classic implementation of monitoring is for the "ops" staff to watch, mainly, second-order effects: CPU, memory, and disk usage. More advanced monitoring captures the user experience: the number of 400-series web errors, how long pages take on the server, the total values embedded in web pages (counted per minute), and so on. Think of each member of a development team "watching the monitors" as they make an independent deploy. If a programmer deploys a web service and the numbers behave unexpectedly, such as 404 errors spiking, the programmer can revert the change. The length of time a defect stays in production shrinks from two weeks (under a scrum sprint) to two hours, without adding much cost.
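The watch-and-revert loop above can be sketched in a few lines. The snippet below is a minimal, hypothetical check (the function name, thresholds, and metric format are my illustration, not any particular monitoring product's API) that compares recent error counts against a baseline to decide whether to roll back:

```python
def should_roll_back(error_counts, baseline_per_min, factor=3.0):
    """Return True if errors after a deploy spike well above the baseline.

    `error_counts` is a list of per-minute error counts (e.g., 404s or 500s)
    observed since the deploy; `baseline_per_min` is the normal rate.
    The 3x factor is an arbitrary illustrative threshold.
    """
    if not error_counts:
        return False  # no data yet; nothing to act on
    recent = sum(error_counts) / len(error_counts)  # average errors per minute
    return recent > baseline_per_min * factor

# A steady error rate means the change can stay:
should_roll_back([2, 1, 3], baseline_per_min=2)     # False
# A spike means the programmer reverts:
should_roll_back([20, 35, 40], baseline_per_min=2)  # True
```

In practice the numbers would come from a monitoring system rather than a hand-built list, but the decision logic is this simple.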
Perhaps the ultimate in monitoring is to take some of those key automated test scripts, cut them down to size, and run them in production.
Continuous testing in production
Synthetic transactions are a fancy term for taking tests, running them in production, and monitoring the results. That might be a complete user journey, from account creation to login, all the way through checkout. Imagine performing that journey, over and over again, all the time, in production. You might skip the checkout; you might log in continuously. Then add code to track how long each activity takes. When people complain about login speed, you have real data about how long the experience takes for users, not just how long the server takes.
Best of all, you can probably do this by reusing your test tooling. That's not entirely free, but it's a fraction of the cost, and you can do it for a fraction of the features: just the "hot spots" that are frequent sources of problems, or the main features.
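Repurposed test tooling for synthetic monitoring can be this small. The sketch below is a hypothetical harness (the step names and callables are stand-ins for real HTTP calls against production endpoints) that runs each step of a journey and records pass/fail plus elapsed time:

```python
import time

def run_journey(steps):
    """Run each named step of a synthetic journey, recording status and timing.

    `steps` maps a step name to a zero-argument callable that raises on
    failure. In real use the callables would drive HTTP requests against
    live endpoints; here they are stand-ins so the sketch is self-contained.
    """
    report = {}
    for name, step in steps.items():
        start = time.perf_counter()
        try:
            step()
            status = "pass"
        except Exception:
            status = "fail"
        report[name] = (status, time.perf_counter() - start)
    return report

# Stand-in steps; a real synthetic monitor would log in, browse, and check out.
report = run_journey({
    "login": lambda: None,
    "checkout": lambda: None,
})
```

Schedule `run_journey` to run every minute and feed the timings into your monitoring, and you have real user-experience data without building anything new.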
Feature flags and canary deploys
When I talk about quick deploys and monitoring, I also emphasize the need for a quick rollback. Feature flags push features into configuration, making them easy to turn on and off. A canary deploy rolls a feature out to a small percentage of the user base. This group could be internal users or "power users": people who want the features, can tolerate a little breakage, and are committed to the product. Rolling a new feature out to canary users lets them complain if they find a problem, much like the canary in a coal mine.
Although early implementations of feature flags made the code more complex, with each flag requiring an "if" statement and two different code blocks, that is not necessarily true today. Asa Schachar, a developer advocate at Optimizely, suggests that a system designed around flags can push the decision of which features are on or off into configuration, reducing unintentional complexity.
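One common way to implement the percentage-rollout part of a canary (my illustration, not necessarily how Optimizely or any specific product does it) is to hash the flag name and user ID into a stable bucket, so each user always gets the same answer and raising the percentage only ever adds users:

```python
import hashlib

def flag_enabled(flag_name, user_id, rollout_percent):
    """Deterministically bucket a user into a 0-99 range for a percentage rollout.

    Hashing (flag, user) together means the same user gets the same decision
    on every request, and different flags bucket users independently.
    """
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# Canary: ship a hypothetical "new_checkout" flag to 5% of users first.
canary_users = [u for u in range(1000) if flag_enabled("new_checkout", u, 5)]
```

Widening the rollout is then just a configuration change, from 5 to 25 to 100, with no redeploy, and rollback is setting the percentage back to zero.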
Encourage and lead
David Hoppe, a senior developer with Ittentional, persuaded me to point out the obvious. As he put it: "How about helping the team care about the product?"
Unless the team cares about something, whether the customer, the product, or even the pursuit of excellence itself, none of the methods above is likely to have much impact. Personally, the consulting engagements I have been most pleased with, the ones where the long-term impact was greatest after I left, all involved helping the team generate its own continuous improvement, something I call the "snowball skill."
Evidence for these practices
The good news is that there is data behind the methods above. A few years ago, Nicole Forsgren began a research effort that would become the State of DevOps Report. In that project, she conducts an annual survey that looks at how organizations structure their work and how they perform, and draws correlations between the two. Forsgren published the results of her 2017 study in the book Accelerate: The Science of Lean Software and DevOps.
Unsurprisingly, most of my suggestions here also appear in the book; encouragement and leadership do not, perhaps because they are so difficult to quantify. I've worked with teams whose members admit they enjoy going home at five o'clock and finding meaning in their families. What people do not admit is that they do not care, even when their actual behavior seems designed to reduce effectiveness.
So get the snowball rolling, and let it pick up speed.