In a company that was used to testing features in a mostly manual way, continuous growth and the evolution towards Continuous Delivery forced the testing mindset to change. The way we performed testing had to adapt to the needs of Continuous Delivery, where we must validate an increasing number of builds quickly without losing confidence in the quality of the delivery.
We adopted a "shift left" testing strategy and increased automation to fit the CD approach, with a series of pipeline stages that exercise different parts of the software incrementally, so that by the time we reach the end very few validations are left to be made manually.
Thinking about which types of tests to use for a given user story, and at which levels of the pipeline to run them, introduced a challenge for our testers, who were not used to thinking about testing this way.
So if you want to know the kind of gear you need to climb the testing pyramid in a CD environment, keep in touch :)
2. Cristiano Cunha
Started as a developer
Worked at MobiComp, Maincheck, Blip and
Farfetch
Worked in testing for 7 years
Lead Automation Tester @Farfetch
Who am I?
9. Continuous Delivery
Building software in such a way that
it can be released to production at any time.
Martin Fowler
... Safely and quickly in a sustainable way.
Jez Humble
10. What major changes are needed?
• Everything is in source control, E V E R Y T H I N G!
• Everyone commits to main
• Backward compatibility
• Independence of applications
• Build generated automatically
• Build deployed automatically
• Reliable and stable environments:
• As close to live as possible (in terms of architecture, application versions, configurations and databases)
• Closed, so nobody can make manual changes or access these environments
• Dedicated while a new application is being validated
• Stable, with the ability to revert changes in a heartbeat
• Defined update process for applications, data and other software
• Automated tests
• Easy ways to understand what went wrong (reports, logs, monitoring)
• Mindset change across all teams
26. Ready to climb?
Thank you!
Cristiano.cunha@farfetch.com
https://pt.linkedin.com/in/cristiano-cunha-3352822
@Melioth
Editor's Notes
We have been using agile methodologies for a few years now, with:
- 2-week sprints
- Stand-ups – SCRUM
- User stories
- Story points
- All the other activities like planning, grooming, etc.
Teams
Code repository
Branching strategy
With several teams, merging code explodes in complexity and time
Regression tests were delayed due to merge complexity and took more and more time
Manual vs automation
We had very few automated tests and our tests were mainly manual; with our constant growth it started to take almost three days to validate an application.
Manual deploy
Once it was validated we had to rely on the availability and skill of our operations team to deploy the application manually, which was time-consuming and error-prone
This did not cope with the demands of environment updates and production deployments
Faster and more frequent deliveries – fail faster so we get feedback faster
Fast, automated validation of each version.
A boring, repeatable release process frees people to do what really matters. Repeating the process many times is how we approach perfection.
The natural evolution for us was to go for Continuous Integration, so we defined a set of small steps that would lead us to this goal; at Farfetch we are all about small victories towards the end objective, which allows us to correct the shot and adapt to change.
Define target and share
Define standards and a backbone for automated test tools and structure
Automation
Performance
Security
Define from the start what the role of these teams is and how they will perform it
PO
An example context for this is integration. Most programmers learn early on that integrating their work with others is a frustrating and painful experience. The natural human response, therefore, is to put off doing it for as long as possible.
The rub, however, is that if we were able to plot pain versus time between integrations, we'd see a graph like this
If you have this kind of exponential relationship, then if you do it more frequently, you can drastically reduce the pain.
This idea of doing painful things more frequently crops up a lot in agile thinking. Testing, refactoring, database migration, conversations with customers, planning, releasing - all sorts of activities are done more frequently.
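The exponential pain-versus-time idea above can be sketched numerically. This is an illustrative model only, not data from the talk; the cost function and constants are assumptions chosen to show the shape of the curve:

```python
import math

def integration_pain(days_between, horizon=30, k=0.5):
    """Total integration pain over `horizon` days if we integrate every
    `days_between` days, assuming pain per merge grows exponentially
    with the time code has diverged: pain = e^(k*t) - 1 (a toy model)."""
    merges = horizon // days_between
    return merges * (math.exp(k * days_between) - 1)

# Integrating daily vs. once a month over the same 30 days:
daily = integration_pain(1)     # 30 small, cheap merges
monthly = integration_pain(30)  # 1 huge, painful merge
print(f"daily: {daily:.1f}  monthly: {monthly:.1f}")
```

With any exponential cost, many small merges cost far less in total than one big one, which is the argument for merging to main frequently.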
Component testing allows individuals the opportunity to combine all of the units within a program and test them as a group.
This testing level is designed to find interface defects between the modules/functions. This is particularly beneficial because it determines how efficiently the units are running together.
Keep in mind that no matter how efficiently each unit is running, if they aren’t properly integrated, it will affect the functionality of the software program. In order to run these types of tests, individuals can make use of various testing methods, but the specific method that will be used to get the job done will depend greatly on the way in which the units are defined.
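As a sketch of the idea, here are two hypothetical units (a discount rule and a price calculator) exercised together through their interface rather than in isolation; the names and logic are invented for illustration:

```python
# Two hypothetical units that would normally be unit-tested in isolation.
def apply_discount(price, percent):
    """Discount rule: reduce a price by a percentage."""
    return round(price * (1 - percent / 100), 2)

def order_total(items, discount_percent=0):
    """Price calculator: sums item prices, then applies the discount rule."""
    subtotal = sum(items)
    return apply_discount(subtotal, discount_percent)

# Component/integration test: exercise both units *together* through the
# public interface, checking the contract between them rather than each
# unit alone.
def test_order_total_applies_discount_to_subtotal():
    assert order_total([10.0, 20.0], discount_percent=10) == 27.0

test_order_total_applies_discount_to_subtotal()
```

Each function could pass its own unit tests and the combination could still be wrong (e.g. the discount applied per item instead of to the subtotal); this level of test is what catches that.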
Mocking is primarily used in unit testing. An object under test may have dependencies on other (complex) objects. To isolate the behavior of the object you want to test you replace the other objects by mocks that simulate the behavior of the real objects.
This is useful if the real objects are impractical to incorporate into the unit test.
In short, mocking is creating objects that simulate the behavior of real objects.
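A minimal sketch of that idea using Python's standard `unittest.mock`; the payment gateway dependency here is hypothetical, standing in for a real object that would be impractical to call from a unit test:

```python
from unittest.mock import Mock

# System under test: depends on an external payment gateway.
def checkout(gateway, amount):
    """Charge the amount; return True only if the gateway approves."""
    response = gateway.charge(amount)
    return response == "approved"

# In the test, the real (slow, external) gateway is replaced by a mock
# that simulates its behavior, isolating checkout() itself.
fake_gateway = Mock()
fake_gateway.charge.return_value = "approved"

assert checkout(fake_gateway, 42.0) is True
# The mock also records how it was called, so the interaction can be verified:
fake_gateway.charge.assert_called_once_with(42.0)
```

The test runs without any network or real payment system, yet still verifies both the return-value logic and that the dependency was called correctly.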
System testing is the first level in which the complete application is tested as a whole.
The goal at this level is to evaluate whether the system has complied with all of the outlined requirements and to see that it meets Quality Standards.
System testing is undertaken by independent testers who haven’t played a role in developing the program. This testing is performed in an environment that closely mirrors production. System Testing is very important because it verifies that the application meets the technical, functional, and business requirements that were set by the customer.
The final level, Acceptance testing (or User Acceptance Testing), is conducted to determine whether the system is ready for release.
During the Software development life cycle, requirements changes can sometimes be misinterpreted in a fashion that does not meet the intended needs of the users. During this final phase, the user will test the system to find out whether the application meets their business’ needs. Once this process has been completed and the software has passed, the program will then be delivered to production.
Instead of thinking about writing tests to validate a user story, we started to think about how to divide the tests throughout those levels to get fast feedback on the build status while still being confident that the quality is the one expected to launch a