The vimrc file contains optional runtime configuration settings that initialize Vim when it starts. We can customize Vim by putting suitable commands in our vimrc, which lives in the home directory.
There are some very complex configurations we can do in the vimrc file, but I am going to show just a few simple ones, because I usually use IDEs (IntelliJ, Eclipse, NetBeans, …) or text editors (Sublime, Notepad++, …) to write my code, and I only use Vim when I am connected to a remote server through SSH, or locally for very simple changes in configuration files and similar.
An example of a vimrc file:
"Avoid console bell when errors
"Language for messages
"Syntax with colors
"Parenthesis, brackets and curly brackets matching
"Use precedent indentation
"Ignore case except uppercase string
set ignorecase smartcase
"Mark search results
Once Homebrew is installed, you can “tap” the Cloud Foundry repository:
brew tap cloudfoundry/tap
Finally, you install the Cloud Foundry CLI with:
brew install cloudfoundry/tap/cf-cli
Once you have installed the CLI tool, you should be able to verify that it works by opening a terminal and running:
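cf --version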
This should show something like:
cf version 6.30.0+decf883fc.2017-09-01
If you see a result like this, the CLI is installed correctly and you can start playing.
Now, we need a trial account with a Cloud Foundry provider. There are multiple options we can check on the Cloud Foundry Certified Providers page. Once we have created the account, we can proceed to log in with our CLI.
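For example, running cf login starts an interactive session where the CLI asks for the API endpoint, email and password (you can also pass the endpoint directly with the -a flag):
cf login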
API endpoint> https://api.eu-gb.bluemix.net
Targeted org example
Targeted space dev
API endpoint: https://api.eu-gb.bluemix.net (API version: 2.75.0)
In the above output, the email is the address you used to sign up for a service.
Once you have successfully logged in, you are authenticated with the system and the Cloud Foundry provider you use knows what information you can access and what your account can do.
The CLI tool stores some of this information: the Cloud Foundry API endpoint and a “token” given when you authenticated. When you logged in, instead of saving your password, Cloud Foundry generated a temporary token that the CLI can store. The CLI can then use this token instead of asking for your email and password again for every command.
The token will expire, usually in 24 hours, and the CLI will need you to login again. When you do, it will remember the last API Endpoint you used, so you now only have to provide your email and password to re-authenticate for another 24 hours.
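You can check what the CLI currently remembers (API endpoint, user, org and space) at any time with:
cf target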
cf help: Shows CLI help.
cf help <command>: Shows CLI help for a specific command.
cf <command> --help: Shows CLI help for a specific command.
cf help -a: Lists all the commands available in the CLI.
Microservices is an area that is still evolving. There is no standard or reference architecture for microservices. Some of the architectures publicly available nowadays come from vendors and, obviously, they try to promote their own tool stacks.
But, even without a specific standard or reference, we can sketch out some guidelines to design and develop microservices.
As we can see, the capability model is mainly split into four different areas:
Core capabilities (per microservice).
Infrastructure capabilities.
Supporting capabilities.
Process and governance capabilities.
Core capabilities are those components generally packaged inside a single microservice. In the case of a microservices-and-fat-jar approach, everything will live inside the file we are generating.
Service listeners and libraries
This box refers to the listeners and libraries the microservice has in place to accept service requests. These can be HTTP listeners, message listeners or others. There is one exception though: if the microservice is in charge only of scheduled tasks, it may not need listeners at all.
Microservices can have some kind of storage to do their task properly: physical storage like MySQL, MongoDB or Elasticsearch, or in-memory storage, caches or in-memory data grids like Ehcache, Hazelcast and others. There are different kinds of storage but, no matter which type is used, it will be owned by the microservice.
This is where the business logic is implemented. It should follow traditional design approaches like modularization and multi-layering. Different microservices can be implemented in different languages and, as a recommendation, they should be as stateless as possible.
This box just refers to the external APIs offered by the microservice. Both asynchronous and synchronous endpoints are included, and it is possible to use technologies ranging from REST/JSON to messaging.
To deploy our application, and for the application to work properly, we need some infrastructure and infrastructure management capabilities.
For obvious reasons, microservice architectures fit better in cloud-based environments than in traditional data center environments. Things like scaling, cost-effective management and the cost of physical infrastructure and operations make a cloud solution more cost effective on many occasions.
We can find different providers like AWS, Azure or IBM Bluemix.
There are multiple options here and, obviously, container solutions are not the only ones. There are options like virtual machines but, from a resources point of view, virtual machines consume more of them. In addition, it is usually much faster to start a new container instance than to start a new virtual machine.
Here, we can find technologies like Docker, Rocket and LXD.
One of the challenges in the microservices world is that the number of instances, containers or virtual machines grows, making manual provisioning and deployments complex, if not impossible. This is where container orchestration tools like Kubernetes, Mesos or Marathon come in quite handy, helping us to automatically deploy applications, adjust traffic flows and replicate instances, among other things.
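As a rough sketch of what these tools automate (assuming a Kubernetes cluster with kubectl configured against it, and a hypothetical image called myorg/orders-service), deploying and scaling a service can be reduced to:
# create a deployment from the container image
kubectl create deployment orders-service --image=myorg/orders-service:1.0
# expose it inside the cluster on port 8080
kubectl expose deployment orders-service --port=8080
# scale it out to three replicas
kubectl scale deployment orders-service --replicas=3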
These capabilities are not specific to the microservices world, but they are essential for supporting large systems.
The service gateway helps us with routing and policy enforcement, acts as a proxy for our services, and can compose multiple service endpoints. There are several options; one of them is the Spring Cloud Zuul gateway.
Software defined load balancer
Our load balancers should be smart enough to manage situations where new instances are added or removed and, in general, any change in the topology.
There are a few solutions; one of them is the combination of Ribbon, Eureka and Zuul in Spring Cloud Netflix. Another one is the Marathon load balancer.
Central log management
When the number of microservices in our system grows, the different operations that before happened on one server now take place on multiple servers. All these servers produce logs, and having them on different machines can make debugging errors quite difficult. For this reason, we should have a centrally-managed log repository. In addition, all the generated logs should carry a correlation ID so that an execution can be tracked easily.
With the number of services increasing, static service resolution becomes close to impossible. To support all the new additions, we need a service discovery mechanism that can deal with this situation at runtime. One option is Spring Cloud Eureka. A different one, more focused on container discovery, is Mesos.
Monolithic applications were able to manage security themselves but, in a microservices ecosystem, we need authentication and token services to allow all the communication flows in our ecosystem.
Spring offers a couple of solutions, like Spring OAuth or Spring Security, but any single sign-on solution should be good.
As we said in the previous article, configuration should be externalized. It is an interesting choice to set up a configuration server in our environments. Spring, again, provides Spring Cloud Config, but there are other alternatives.
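As a small sketch (assuming a Spring Cloud Config server running on the conventional port 8888 and a hypothetical application called myapp), clients fetch their configuration over plain HTTP:
# ask the config server for the "dev" profile properties of "myapp"
curl http://localhost:8888/myapp/dev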
We need to remember that now, with all these new instances scaling up and down, environment changes, service dependencies and new deployments going on, one of the most important things is to monitor our system. Tools like Spring Cloud Netflix Turbine or the Hystrix dashboard provide service-level information. There are other tools that provide end-to-end monitoring, like AppDynamics or New Relic.
The use of dependency management visualization tools is recommended to stay aware of the system's complexity. They will help us check dependencies among services and take appropriate design decisions.
As we have said before, each microservice should have its own data storage, and this should not be shared between different microservices. From a design point of view this is a great solution but, sometimes, organizations need to create reports or have business processes that use data from different services. To avoid unnecessary dependencies we can set up a data lake. Data lakes are like data warehouses where raw data is stored without any assumption about how the information is going to be used. This way, any service that needs information about another service just goes to the data lake to find the data.
One of the things we need to consider in this approach is that we need to propagate the changes to the data lake to keep the information in sync; some tools that can help us with this are Spring Cloud Data Flow or Kafka.
We want to maximize the decoupling among microservices. The way to do this is to develop them to be as reactive as possible. For this, reliable messaging systems are needed. Tools like RabbitMQ, ActiveMQ or Kafka are good for this purpose.
Process and governance capabilities
Basically, this is how we put everything together and survive. We need some processes, tools and guidelines around microservices implementations.
One of the keys to using a microservice-oriented architecture is being agile: quick deployments, builds, continuous integration, testing… This is where a DevOps culture comes in handy, as opposed to a waterfall culture.
Continuous integration, continuous delivery, continuous deployments, test automation, all of them are needed or at least recommended in a microservices environment.
And again, testing, testing, testing. I cannot stress how important this is: now that we have our system split into microservices, we need to use mocking techniques to test and, to be completely confident, we need functional and integration tests.
We are going to create containers and, in the same way that we need a repository to store the artifacts we build, we need a container registry to store our container images. There are some options like Docker Hub, Google Container Registry or Amazon EC2 Container Registry.
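As a minimal sketch (the image name and account are placeholders), publishing a locally built image to a registry like Docker Hub looks like this:
# tag the locally built image with the registry account
docker tag orders-service myorg/orders-service:1.0
# push it to the registry
docker push myorg/orders-service:1.0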
Microservices systems are based on communication: communication among microservices and calls to the APIs offered by these microservices. We need to ensure that people who want to use our available APIs can understand how to do it. For this reason it is important to have a good API repository, one that can:
Expose the repository via a web browser.
Provide easy ways to navigate the APIs.
Make it possible to invoke and test the endpoints with examples.
For all of this we can use tools like Swagger or RAML.
Reference architecture and libraries
In an ecosystem like this, the need to set standards, reference models, best practices and guidelines on how to better implement microservices is even more important than before. All of this should live as architecture blueprints, libraries, tools and techniques promoted and enforced by the organization and the development teams.
I hope that after this article we start having a rough idea of how to tackle the implementation of our systems following a microservice approach, plus a few tools to start playing with.
Note: Article based on my notes about the book “Spring 5.0 Microservices – Second Edition” by Rajesh R. V.
Cloud computing is one of the most rapidly evolving technologies. It promises many benefits, such as cost advantages, speed, agility, flexibility and elasticity.
But how do we ensure an application can run seamlessly across multiple providers and take advantage of the different cloud services? This means the application must work effectively in a cloud environment and understand and utilize cloud behaviors such as elasticity, utilization-based charging, fail-awareness, and so on.
It is important to follow certain factors while developing a cloud-native application. For this purpose we have the Twelve-Factor App, a methodology that describes the characteristics expected in a modern cloud-ready application.
The Twelve Factors
I. Codebase
This factor advises that each application should have a single code base, with multiple deployments of that same code base, for example development, testing and production. The code is typically managed in a VCS (Version Control System) like Git, Subversion or similar.
II. Dependencies
All applications should bundle their dependencies along with the application bundle. These dependencies can be managed with build tools like Maven or Gradle, which use descriptor files to specify them and resolve them from build artifact repositories.
III. Config
All configuration should be externalized from the code. The code should not change among environments; only the properties in the system should change.
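As a tiny sketch (the variable name and value are just placeholders), a property can live in the environment instead of in the code:
# set per environment, e.g. in the deployment configuration
export DATABASE_URL="jdbc:mysql://prod-db.example.com:3306/app"
# the application reads DATABASE_URL at startup instead of hard-coding it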
IV. Backing services
All backing services should be accessible through an addressable URL. All services should be reachable through a URL without complex communication requirements.
V. Build, release, run
This factor advocates strong isolation among the build stage, the release stage and the run stage. The build stage refers to compiling and producing binaries, including any required assets. The release stage refers to combining the binaries with environment-specific configuration parameters. The run stage refers to running the application on a specific execution environment. This pipeline is unidirectional.
VI. Processes
This factor suggests that processes should be stateless and share nothing. If the application is stateless, it is fault tolerant and can be scaled out easily.
VII. Port binding
Applications developed following this methodology should be self-contained or standalone, and should not rely on runtime injection of a web server into the execution environment to create a web-facing service. The web app exports HTTP as a service by binding to a port and listening to requests coming in on that port.
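For example (a sketch assuming a self-contained Spring Boot fat jar; the jar name is a placeholder), the application embeds its own server and binds the port itself:
# run the standalone jar and bind the embedded server to port 8080
java -jar myapp.jar --server.port=8080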
VIII. Concurrency
This factor states that processes should be designed to scale out by replicating the processes, which simply means spinning up more identical service instances.
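Tying this back to Cloud Foundry, scaling out is just a matter of asking for more identical instances (myapp is a placeholder name):
cf scale myapp -i 3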
IX. Disposability
This factor advocates building applications with minimal startup and shutdown times. This will help us in automated deployment environments where we need to bring instances up and down as quickly as possible.
X. Dev/Prod parity
This factor establishes the importance of keeping the development and production environments as close as possible. This may not apply to the local environments where developers write their code (to save costs, they tend to run everything on one machine) but, at least, we should have non-production environments close enough to our production environment.
XI. Logs
This factor advocates the use of a centralized logging framework to avoid local I/O on the systems. This is to prevent bottlenecks caused by I/O that is not fast enough.
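In Cloud Foundry, for instance, applications simply write to stdout/stderr and the platform collects the streams; assuming an app called myapp (a placeholder), you can inspect them with:
cf logs myapp --recent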
XII. Admin processes
This factor advises you to run admin tasks against the same release and an identical environment as the application's long-running processes. The admin consoles should be packaged along with the application code.
I recommend you read The Twelve-Factor App page and its different sections carefully.
As developers, an important part of our job is sometimes fixing problems in the different environments where our applications are deployed. Usually, this means dealing with huge log files to find where errors occur, and their stack traces, to add some context to the problem. The problem is that log files are usually verbose and contain a lot of information.
A couple of useful commands to deal with this are grep and zgrep.
Both have the same purpose; the only difference is that “grep” works with normal files and “zgrep” works with compressed (.gz) files. Usually, files are compressed due to the log rotation scheduled on the servers. Both commands have multiple options and flags, but I am going to show here a few flags that have been useful multiple times (there is a combined example after the list):
-E expr: Allows us to supply a pattern for the search.
-C num: Prints num lines of leading and trailing output context.
--color: Shows the matched information in color in the terminal.
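Putting these together (file names and the search pattern are just illustrative):
# search for errors with 5 lines of context, highlighting matches
grep -E "ERROR|Exception" -C 5 --color application.log
# the same search on a rotated, compressed log
zgrep -E "ERROR|Exception" -C 5 --color application.log.1.gz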
Nowadays, we are used to deploying code in the cloud and having all our machines and servers in cloud environments. All of this has made the use of ssh to connect remotely to our servers hosted in the cloud even more important.
I have typed the commands to connect to one server or another in my console many times but, like every developer, I am lazy and I try to simplify my life. In this case, we can do it with a few simple lines in a couple of files:
~/.ssh/config: Here we configure the machines we want to connect to, or the tunnels we want to create.
~/.bashrc or ~/.bash_profile: Here we create some aliases to easily connect to our servers.
SSH config file
Server to connect
# MyServer-1 - myDb
LocalForward 3307 myserver1.myorg.com:3306
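The snippet above only shows the port forward line; a minimal sketch of the two host entries that the aliases below rely on (host name and user are placeholders) could look like this:
# Plain connection to the server
Host myServer1
    HostName myserver1.myorg.com
    User myuser
# Same server, opening a local tunnel to its MySQL port
Host myServer1Db
    HostName myserver1.myorg.com
    User myuser
    LocalForward 3307 myserver1.myorg.com:3306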
Bash Config file
alias myserver1="ssh myServer1"
alias myserver1db="ssh myServer1Db"
After this, connecting to our remote servers is as easy as executing our aliases in the console. No more remembering commands.
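For example, with the tunnel alias defined above (the database user is a placeholder), reaching the remote MySQL instance is just:
# open the tunnel in one terminal
myserver1db
# in another terminal, connect through the forwarded local port
mysql -h 127.0.0.1 -P 3307 -u myuser -p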