AWS CDK Opinionated Pipeline – Where is What?

Background

You use AWS CDK. It’s great. It does a lot for you. Then one day something goes wrong. OK, it hasn’t happened yet. But you want to be prepared for that (at least to some extent). The following is what I found while preparing. Sharing it to hopefully save the reader some time.

Basics

Before we dive in, let’s just make sure we’ve got the basics covered.

cdk ls

cdk ls lists all the stacks in the app, including the pipeline.

Example from my test project:

$ cdk ls
Proj1Stack
Proj1Stack/Deploy1/LambdaStack1
Proj1Stack/Deploy2/LambdaStack1
  • Proj1Stack is the pipeline.
  • Deploy1 and Deploy2 are “stages”.

cdk synth

cdk synth $STACK_NAME > 1.yaml is your friend, a debugging tool: it shows the generated CloudFormation.
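For example, to see the template of one of the stage stacks from the cdk ls output above:

$ cdk synth Proj1Stack/Deploy1/LambdaStack1 > 1.yaml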

cdk.out directory

cdk.out is the directory where cdk synth outputs everything that’s needed for deploying (CloudFormation templates, related assets, metadata). They call it a Cloud Assembly.

All assets are named based on the hash of their content so they are unique and immutable.
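For orientation, the cdk.out of my test project looks roughly like this (a sketch; the exact file set depends on the CDK version, and the hash is shortened):

$ ls cdk.out
Proj1Stack.assets.json        # asset manifest, the input for cdk-assets
Proj1Stack.template.json      # CloudFormation template of the pipeline stack
asset.3f8d4e...               # an asset, named by the hash of its content
assembly-Proj1Stack-Deploy1   # nested cloud assembly for the Deploy1 stage
assembly-Proj1Stack-Deploy2   # nested cloud assembly for the Deploy2 stage
cdk.out                       # cloud assembly version marker
manifest.json                 # top-level cloud assembly manifest
tree.json                     # construct tree metadata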

What Does the Generated Pipeline Look Like?

When you use an opinionated pipeline, you can see the following generated CodePipeline actions:

  • Source (with a long hash as the output artifact name)
  • Build with name Synth (a CodeBuild project that runs cdk synth)
  • Build with name SelfMutate (a CodeBuild project that runs cdk deploy to update the pipeline)
  • Build with name FileAsset1 (a CodeBuild project that runs cdk-assets publish). From reading sources: there might be several cdk-assets publish commands configured in the CodeBuild project.
  • Then two CloudFormation deploy actions for each “stage” you want to deploy to (usage of change sets is the default but can be disabled as per the documentation, see useChangeSets):
    • CHANGE_SET_REPLACE
    • CHANGE_SET_EXECUTE

cdk-assets

“It will take the assets listed in the manifest, prepare them as required and upload them to the locations indicated in the manifest.”

Note that cdk-assets does not make any decisions; the metadata in the cdk.out directory describes the assets, how to build/transform them, and where they should go.

cdk-assets can only handle two types of assets:

  • files (including directories). cdk-assets knows how to zip directories and how to upload files and directories to S3.

    (From reading the source code) I haven’t seen this in use, but apparently cdk-assets can also run an executable to package a file (or directory?). In this case the content type of the output is assumed to be application/zip. 🤷‍♂️
  • Docker images. cdk-assets knows how to build Docker images and push them to a registry.

Sample command to see the list of assets: npx cdk-assets ls -p cdk.out/Proj1Stack.assets.json
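And the publish counterpart, which is essentially what the FileAsset actions described above run:

$ npx cdk-assets publish -p cdk.out/Proj1Stack.assets.json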

What is Built When?

Files – unprocessed

If the files/directories don’t need any processing, they are just copied over to cdk.out during cdk synth and given a name which is a hash of the contents.

Example: Lambda function code

Files – processed

The processing happens during cdk synth, so cdk.out contains already-processed assets.

Example: Node.js Lambda function code (optionally processed by tsc, then bundled by esbuild)

Docker Images

cdk-assets builds a Docker image and pushes it to the specified repository. The input for the image build is a directory in cdk.out containing the Dockerfile and related files.

Deploy

After everything has been built and uploaded by cdk synth and cdk-assets, the deployment uses the CloudFormation template (templates?) from the cdk.out directory. At this point the assets (which the template references) are already in ECR and S3.


  • I tried to condense the information that I deemed important.
  • Let me know if something is missing or if you see mistakes.
  • The plan is to update this post as I discover new information.
  • My mistakes so far
    • Started taking notes quite a few hours into the process instead of from the start. That would especially have saved me the jumping between the pipeline and the build projects to re-check what each action does.
    • Editing this post tired
  • Last edit: 2023-01-20

Telegraph and the Unix Shell

What follows is my opinion of the interactive mode of the Unix shell: it is not much different from using the telegraph. I’ll substantiate the claim and point out what went wrong.

History

A short history of relevant inventions and events follows.

Telegraph – 1840s

The telegraph is a “point-to-point text messaging system”.

Teleprinter – 1887

The teleprinter improved on the telegraph’s user interface by adding a keyboard. Essentially, it’s still a text messaging system.

“Computers used teleprinters for input and output from the early days of computing.”

Computer Terminal with VDU – 1950s

The computer terminal with a “video display unit” (VDU) improved on the teleprinter by replacing the printer with a screen.

(Summary Up to This Point)

We see incremental progress in transmitting text: first between humans, later between a human and a computer. The core concept did not change over time. Send text, receive text in response.

VT 52 – 1974/1975

The VT52 was released. It was a pretty big deal because it supported cursor movement, so text on the screen could be updated. In more general terms, it allowed interaction with what’s on the screen.

Bill Joy – vi – 1976

Bill Joy released vi in 1976. He figured that since cursor movement was supported, he could make an editor that actually uses the whole screen interactively. That’s in contrast to the ex editor, which didn’t.

What Went Wrong with the Shell?

The shell never caught up with the idea of interactivity on the screen. It’s still a “point-to-point text messaging system”. The paradigm shift never happened. Treating the received text as if it were printed on paper puts the idea of interacting with the output completely out of the realm of possibility.

The “interactive” shell is not that interactive. It is only “interactive” compared to “batch”. If you take a 24-line terminal, considering that all interaction happens on one line (the “command line”, I guess), that’s about 4% interactivity by line count.

Practical Consequences

Copy + Paste

Does the following sequence of operations happen to you when working in a shell?

  1. Type and run a command
  2. Start typing a second command
  3. Copy something from the output of the first command
  4. Paste that as an argument to the second command

I know that it happens to me quite often. Why can’t the shell (tab-)complete from the output of the first command? Because the shell has no idea (1) what’s in that output, and (2) what the semantic meaning of that output is and how to use it for completion.

Imagine for a moment a website where you see a list of items, but if you want more information about one of them, you need to copy its name and paste it somewhere else, adding a word or two. Sounds weird? How is this OK in a shell then?

For the “shell is not supposed to do that” camp: you are welcome to continue to use Nano, Pico and Notepad for programming because “text editor is not supposed to do that”; I’ll use an IDE because it’s more productive.

Each Command on its own

Typically a shell user is trying to achieve a goal. For that, they typically run a series of related commands. The Unix shell doesn’t know or care about that.

Let’s say you issued ls *.txt and you see the file you were interested in in the output. You can’t interact with it, despite it being on the screen right in front of you, so you proceed. Now you type cp and press tab for completion. The completion will happily try to complete the names of all the files in the directory, not only the *.txt ones, which are far more likely to be what the user means. There isn’t any relation between sequentially issued commands in the Unix shell. Want composition? OK, use a pipe or start scripting.


I might expand this article in the future.

API-less

I was always complaining about how AWS made APIs that are inconsistent in so many dimensions. AWS teams can’t even agree (read: don’t care enough to agree) on how to name pagination-related fields.

Because I am such an important person (who was that again?), I hope everybody in AWS (OK, at least someone) has read my blog and is checking it at least twice a day (maybe by mistake) for new extreme wisdom (read: stupid rants).

One of these people said “Fook you, Ilya” and made a service without an API. Not kidding. Meet AWS Control Tower. No API, no CLI, no CloudFormation. “Take that, Ilya!”

For some balance, I will mention that other people in AWS are trying to build a uniform API. It’s not stellar at the moment, but hey, at least they are trying.

Have a nice $(curl https://api.timeofday.example.com/) !

AWS Cloud Control is in for Review

Since 2016 I have been working on an AWS library. On 2021-09-30, AWS released something comparable: the AWS Cloud Control API. If it’s comparable, let’s compare. I hope the reader will find this unique perspective interesting.

Purpose

AWS Cloud Control provides a uniform API to Create, Read, Update, Delete, and List different resources. In short, CRUDL.
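In AWS CLI terms, CRUDL maps one-to-one to the following commands (a sketch; AWS::Logs::LogGroup is just an example resource type, and the JSON bodies are elided):

$ aws cloudcontrol create-resource --type-name AWS::Logs::LogGroup --desired-state '{...}'
$ aws cloudcontrol get-resource    --type-name AWS::Logs::LogGroup --identifier my-log-group
$ aws cloudcontrol update-resource --type-name AWS::Logs::LogGroup --identifier my-log-group --patch-document '[...]'
$ aws cloudcontrol delete-resource --type-name AWS::Logs::LogGroup --identifier my-log-group
$ aws cloudcontrol list-resources  --type-name AWS::Logs::LogGroup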

My library aims to provide a uniform, simplified, declarative interface whose main functionality is converging to a desired state.

Supported Resources

Cloud Control supports many more resources. My library supports very few: the ones I needed most for my work. That’s just how an experimental library written by one person in their spare time compares to a product from AWS.

Mode of Operation

Cloud Control

Operates on one resource at a time.

Cloud Control provides a relatively low-level API: CRUDL. The advantage of Cloud Control over existing APIs is the uniformity, not achieving a desired state. You either create or update, and you need to know which one you are doing. (Deletion is a different story.)

I am amazed that somebody at AWS was able to pull this off… in a place where different services don’t agree on naming conventions for tagging resources, on how and what can be tagged, and don’t have a convention for naming pagination fields. (Which, by the way, is a nightmare for users, though typically abstracted away by Terraform or CloudFormation.) Sorry for the rant.

The operations are performed asynchronously. There is an API to query the status: GetResourceRequestStatus. This approach is better for advanced use cases where multiple resources are being created simultaneously. (Like CloudFormation?)

The operations are idempotent. This is achieved using a client token, as in other AWS services.
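A sketch of the asynchronous flow from the AWS CLI (the resource name and the token value are illustrative):

# The mutating call returns immediately with a ProgressEvent
$ TOKEN=$(aws cloudcontrol delete-resource \
    --type-name AWS::Logs::LogGroup \
    --identifier my-log-group \
    --client-token my-idempotency-key-1 \
    --query 'ProgressEvent.RequestToken' --output text)

# Poll until OperationStatus is SUCCESS (or FAILED)
$ aws cloudcontrol get-resource-request-status --request-token "$TOKEN"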

My Library

Operates on either one resource or a set of resources of the same type at a time. Edit: … from the default, specified, or all regions.

The library looks up the resources and either creates or updates them, depending on whether they exist or not.

The operations are performed synchronously. The library throws an exception if anything goes wrong. I find this approach much more user-friendly for basic use cases.

Idempotency is out of the picture. Should it be in? Probably. Looks like an omission on my side. How to best shield the user of the library from this issue? I don’t know yet. Need to think.

Describing Desired State

Cloud Control

The “create” API has a desired-state parameter: DesiredState.

The “update” API doesn’t, which I find very strange. I thought that desired state is something to be achieved no matter what the current state is. Desired state is only used in the “create” API, so from my perspective it could have been called “properties” or whatever.

The “update” API has a PatchDocument parameter, which I find user-unfriendly for the following reasons (see the sketch after this list):

  1. Who actually works with the JSON Patch format? This is the first time I’ve seen it required.
  2. I think it is less mentally challenging to describe the desired state, not the delta. IaC tools, including CloudFormation, typically calculate the diff between the current and desired state for the user, so the user doesn’t need to get into this.
  3. It makes the update inconsistent with create.
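To make the create/update asymmetry concrete, a sketch (illustrative resource type and properties):

# “create”: you pass the full desired state
$ aws cloudcontrol create-resource \
    --type-name AWS::Logs::LogGroup \
    --desired-state '{"LogGroupName":"my-log-group","RetentionInDays":30}'

# “update”: you pass an RFC 6902 JSON Patch, i.e. the delta
$ aws cloudcontrol update-resource \
    --type-name AWS::Logs::LogGroup \
    --identifier my-log-group \
    --patch-document '[{"op":"replace","path":"/RetentionInDays","value":90}]'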

My Library

There is no separate create and update. The user specifies the desired state as a parameter to the “converge” function. Converge then either creates or updates the resource(s). The (non-existent) “create” and “update” are therefore completely uniform in the library.

Search / Filtering

Cloud Control

Search is not supported. In practice this means listing all the resources and filtering the result on the client side. Typically that can be done in the AWS CLI with the --query flag, which is supported globally, for any AWS CLI command. Unfortunately, I don’t see a way to make it work in this situation. The returned result has a ResourceDescriptions field, an array where each item has a Properties field, and the Properties field is a string (JSON). Apparently JMESPath does not support parsing JSON in this situation. This means that the output of the Cloud Control AWS CLI has to be piped to jq, or maybe to a programming language, for filtering and/or further processing.
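An example of that client-side workaround (the fromjson step is needed precisely because Properties is a string; the resource type and the filter are illustrative):

$ aws cloudcontrol list-resources --type-name AWS::Logs::LogGroup \
    | jq '.ResourceDescriptions[].Properties | fromjson | select(.RetentionInDays == 30)'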

My Library

While the number of supported resources and filters is low, the library supports filtering. The filtering is done on the server or on the client, completely transparently to the users of the library. What’s done on the server and what’s done on the client? Simple: when filtering by a given property is supported on the server side, it’s done there.

Desired State Format

Cloud Control

Cloud Control uses CloudFormation syntax. This makes sense.

My Library

My library uses the same format as you would see when using AWS CLI to describe the resource. It allows access to properties that CloudFormation does not have, and is unlikely to have, so for example this works:

  1. instance = AWS::Instance(...).converge(State = 'running') — which creates or turns on the specified EC2 instance / instances. Turning on or off EC2 instances is not supported in CloudFormation.
  2. vpc = AWS::Vpc(IsDefault=true).expect(1) — get a reference to default VPC (to use in further operations).

Closing Thoughts

  1. Cloud Control looks like a step in the right direction.
  2. Having Properties as a string is a major ergonomic issue.
  3. JSON patch for update is a huge ergonomic issue.
  4. Search (filtering in List) functionality is missing.
  5. “Desired state” naming is unjustified.

Cloud Control is a big effort, so let’s give the team some slack and see how the API improves with time. Hopefully soon 🙂


Have a nice day, evening, or night! Mornings are just statistically less nice…

The Web vs Unix

I would like to share my perspective on what’s wrong with Unix. This time by comparing and contrasting it to the Web.

User Agent

With some amount of stretch, one could say that the equivalent of the browser (user agent) would be a shell with a terminal. You interact with it and it does things for you, like the browser.

User Interface

The web browser is capable of rendering textual and graphical content. The Unix shell relies on the terminal (usually an emulator mimicking decades-old hardware) for the user interface and in most cases is limited to fixed-width text.

The main difference is how you can interact with the content. In the shell, you can’t. The shell is out of the game. The content that you see on your screen goes from a program straight into the terminal, bypassing the shell. Compared to a browser, that sounds insane: you are simply unable to interact with the content that you see. Ironically enough, this is happening in what’s called an “interactive shell”. Some terminals match the text against a predefined set of patterns and allow some minimal interaction, such as the ability to click a link to open it in a browser.

The browser is a strict superset when we look at interaction capabilities: you can type, and you can interact with the objects on the screen by clicking them. What an amazing concept! Maybe some day shells will be able to do that too! Meanwhile, in the shell, you type your commands and get a dump of text back, with rare exceptions.

I would summarize that the shell is a shitty user agent. 💩

I already can hear the coming “shell is not supposed to do it” argument. My opinion: shell is supposed to do whatever is needed for me to be productive. If it’s your “Unix Philosophy” vs me being productive then you can continue to use Notepad (or ed, the standard text editor, for that matter) and I would be using an IDE, OK?

Layout Engine

It’s also called a browser engine. That’s because on the web it’s in the browser. But where is it on Unix? Everywhere. Yes, “make each program do one thing well” went out the window long ago. Each program does (hopefully) one thing, and then it also does the layout of its output.

Each program has the following main options for handling the input/output:

  1. Primitive output – the program dumps some text on standard output. Let’s include colored text here; it’s just some additional color codes. This is equivalent to not having a layout engine. Sample program in this category: grep.
  2. Interactive UI – the program uses ncurses or a similar library. This is a relatively small number of programs.
  3. Layout engine – the program contains some form of a layout engine. This is pretty common (see the demonstration below). Sample programs in this category: ls, ps, top, diff (columns output), wc, …
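A quick demonstration that the layout engine lives inside each program: ls checks whether its stdout is a terminal and does the column layout itself.

$ ls            # multi-column output, laid out by ls itself, not by the terminal
$ ls | cat      # same data, one name per line: the built-in “layout engine” is off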

Common issues with the “layout engines” that cause an unpleasant, broken view in Unix include:

  1. Improper handling of data that is wider than some hard-coded fixed width
  2. Improper handling of Unicode
  3. Failure to account for “unexpected” terminal escape codes in the input (which, after processing, find their way into the output of utilities like sed)

TCP/IP

Pipe

Let’s talk about pipes, before everybody gets offended and says pipes are the sacred cow (sorry, best feature) of Unix. Yes, they probably are.

Pipes roughly correspond to the TCP/IP protocols. Pipes deliver data. For now, let’s leave aside the fact that they are unidirectional, as opposed to TCP, which is bidirectional.

Since the web is a stack of protocols, the obvious question is: how do the other parts of the stack correspond? Read on.

HTTP

HTTP corresponds to text. Well, mostly text. Sometimes null-character-separated records. Sometimes something else. That’s the standard “format” for communicating between Unix applications.

“Write programs to handle text streams, because that is a universal interface.” – Basics of the Unix Philosophy.

The original claim is that text is best for interoperability: a large number of utilities take text as input and also output text, manipulating it in many different ways in between. Sounds like a dream. Except in reality this dream turned out to be a nightmare.

Incompatible, ad-hoc formats

Text on Unix is not a single format. It’s a bunch of ad-hoc formats, typically incompatible between different programs. That’s why we have a variety of tools such as sed, cut, awk and the like. Here is my hot take: these tools are not solutions, they are workarounds. When you don’t have a protocol for communicating between applications, you need a bunch of adapters. Like Sisyphus, one needs to write these adapters. All the time. Forever. Text parsing and manipulation feel like a core part of Unix. From my perspective, it’s accidental complexity.

On a philosophical note: the “universal interface” should have been a stream of bytes, or maybe even bits. I guess that was not found to be very useful. Apparently, if you add a line separator character, it is good enough to become a recommendation. But why stop there? Why not add more structure? Maybe accommodate the fact that most of the data is either records with named fields or tables with named columns? Are you sure you counted the columns right for your awk '{print $8}'?

This is in contrast to HTTP, which is spoken by everyone on the Web.

Some Hope

Newer CLIs usually have an option to output JSON or (less prevalent) YAML. They are forming a new ecosystem with a different set of tools. From my perspective, this proves the point that the “universal interface” might not be that universal, and not as productive as envisioned (should I dare say “unacceptable”?).

Should it be the halfway-structured, aka semi-structured, JSON? Is it the sweet spot? I mean, why stop here? Maybe we need something with a schema? Let me know what you think.

One of the notable projects, jc, is an adapter between the “universal interface” and something that you would actually like to work with.
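For example (a sketch assuming jc and jq are installed; jc’s --ls parser turns ls output into JSON):

$ ls -l /usr/bin | jc --ls | jq -r '.[] | select(.size > 50000) | .filename'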

Shameless Plug

If only we could have a shell that could play a role in this new ecosystem… or maybe even push it a bit in the direction of having semantically meaningful objects on the screen so that interaction would be possible…

Yes, I am aware of other projects solving the same issue. While we mostly agree on the problem, I haven’t yet seen a project which sees the solution the same way as I do.


Happy DevOps-ing!

“Use Dumb Shell, don’t Reinvent the Wheel”

Opening Rant

You don’t hear one developer saying “Just use Notepad” to a colleague with argumentation that goes roughly like this:

Why are you using this horrible Visual Studio Code? It has built-in debugger! No!

JetBrains IDEs? No! They do too much! They are so into the code!

Vim? Emacs? Not pure enough! Who needs that stupid syntax highlighting?

Keep text editing pure! Any semantic understanding by the text editor is undesirable, other programs should handle that. You don’t want to complicate the text editor.

Developers don’t say that, because user experience and productivity matter. Yet “use a dumb shell” is considered an acceptable opinion. Is it really that common for people to fall on their heads that hard (or, alternatively, to not give it any thought)? WTF?

The solution (the shell) should be as simple as possible, but not simpler than that. Current shells are simpler than good user experience requires. Wrong trade-off. Keeping something simple is important, but not more important than the outcomes.

(Image source: https://www.flickr.com/photos/toddle_email_newsletters/15413603567/, linking to http://www.workcompass.com/)

Additional food for thought:

  1. Why use a car when bicycle is so much simpler?
  2. Why use electricity when fire is so much simpler?
  3. Why have water in your house when a well is so much simpler?

Background

I was doing consulting. The usual suspects: AWS, bash, Python, Puppet, Chef. Got to Terraform later. I had, and am still having, subpar experiences with these tools. Anything I wanted to do was overly burdensome, complicated, and full of pitfalls.

Since I can’t attempt to fix everything, I picked the worst offender and started working on an alternative programming language and shell combo. The motivating opinion is that ops people have neither a good programming language nor an adequate shell.

The absence of a good programming language for ops was covered in another post. In this post I will cover some of the things that are wrong with the interactive shell.

The Shell

The dominant player is bash. It hasn’t changed much in decades: you type commands and get a dump of text on your screen. Most of the alternatives have been essentially the same in this regard, for decades.

Is this because of the brilliant design? I would ask: which design? This? Quoting:

I wrote quite complex shell scripts and my first suggestion is “don’t”. The reason is that is fairly easy to make a small mistake that hinders your script, or even make it dangerous.

The “Dumb Shell” Approach

In this post I would like to address a common thought that I hear from people regarding Next Generation Shell, a new programming language and shell that I’m working on. Note that other shells that are more advanced than POSIX shells also get this. Quoting @cup from lobste.rs:

Wouldnt it be better to just have a dumb shell, that can call programs to do heavy lifting (read: programming languages). This way you have a “division of labor”. Shell works best for launching executables, and programming languages work best for handling data structures and algorithms.

No, it would not. I refuse to accept under-powered tools.

Dumbness is a Fundamental Flaw

The “dumb shell” has no semantic understanding and doesn’t care about programs’ inputs or outputs. Let’s see how that plays out.

Today, “understanding” of programs’ inputs is covered by completion. Completion was added because the “dumb shell” had a horrible user experience. It’s slightly better now that the shell “understands” programs’ inputs to some degree. To some people, completion is scope creep. I think of it as better user experience and a productivity gain.

“Understanding” of programs’ outputs? We are not there yet. It also seems that interacting with objects on the screen is too novel an idea for the shell. Considering how long this idea has been out there: WTF?

Let’s see how this “dumbness” manifests as bad user experience even in the very basic, “intended” functionality:

Programs’ Output – Size

Do you know of any real-world scenario where a human is supposed to go over 10K lines on the screen? I mean just sit there and read them. Let me know. I’ve never seen such a use case.

The shell is dumb; the shell “does not intervene” in programs’ outputs. Sounds good, until you get an unlimited number of lines dumped on your screen.

“Should have used less,” you think later. Right. What if you forgot? The buffer is now filled with useless output and you can’t see the outputs of previous programs. Are you being punished? No, just nobody cared about the UX. Alternatively, “it would be too complicated to implement”.

Programs’ Output – All Mixed

  1. Want to know what on your screen is stdout and what is stderr? Well… you can’t. Your shell is dumb; it doesn’t deal with things like that.
  2. Want to know which program the output came from? Nope. Some programs cope with that to some degree by prepending their name to error messages: ls xxx gives you ls: xxx: No such file or directory. What a wonderful strategy! Keep the shell dumb and push the burden onto all the programs.
  3. You can’t type because some background job keeps dumping text on the screen where you are trying to work? Too bad, you should have used redirection, because guess what… your shell doesn’t handle that either. And you can’t add redirection after the program is already running; again, not the shell’s business.

Programs’ Output – Semantic Understanding

You just typed aws ec2 describe-instances --filters ... and now you have some output.

You now see on your screen the instance you would like to stop. The ID of the instance is right in front of your face. Now you type aws ec2 stop-instances --instance-ids. You would like to append the instance ID that you see on the screen. Nope. Your shell doesn’t do that. Too dumb. Select with the mouse and paste, because f*ck you!

Side note: the amazing AWS engineers did not include any human-readable output format, so you get JSON dumped on your screen (or any other format that is still non-human-compatible).

Let’s imagine for a moment that the command output had some semantic meaning to the shell.

  1. The shell would display the output as a table.
  2. The table would be interactive (interactive output, what a heresy!) and one could navigate with arrow keys and have a shortcut for copy/paste the current cell value to the command line (for completion).
  3. You could interact with the objects in the table with the mouse (very new concept, another heresy for the shell).
  4. How about: instead of typing aws ec2 stop-instances --instance-ids, you navigate to the correct line, press enter, choose “stop” from the menu, and the command is constructed for you? aws ec2 stop-instances --instance-ids i-123... Amazing, huh? Well, your shell can’t do that.

Meaning, do you speak it mo***er?

How about: after performing operations using the UI, you would get (per your choice) one of the snippets below, which would re-create the operation:

  1. CLI commands
  2. CloudFormation template
  3. Terraform “code”

Solution: UI for the Shell

Suppose I agree for a second, what do you suggest?

https://github.com/ngs-lang/ngs/wiki/UI-Design

I personally don’t see how the described features could be implemented as external programs, keeping the shell “dumb”.

We Can Do Better Today

The reality has changed. What was once amazing is subpar by today’s standards. The world outside of the shell moved forward while the shell stayed almost the same. Brilliant design? Brilliant what?

Let’s move this industry together from the stone age of bash shell to the bronze age of something a bit less subpar – Next Generation Shell.

Closing Rant

Imaginary UNIX people:

We wanted to separate things because they are semantically different, so we split the output into stdout and stderr. Well… stderr is actually for everything that is not stdout.

One bit of metadata (stdout vs stderr) for the semantic meaning of the output should be enough for everyone, forever. Well… at least it’s simple for us to implement.


Update: discussion on lobste.rs

AWS CloudFormation became a programming language

… kind of.

Declarative has its advantages, which are hyped all over the internet, so I’ll skip that part. The painful downside of the declarative approach is often expressivity. A sample proof:

Now you can have Python embedded in your CloudFormation file. That is part of CloudFormation Macros, which were introduced on 2018-09-06.


Happy coding, everyone!

Bash pitfall: if test, if [, if [[

I bet you’ve seen a lot of scripts with a seemingly innocent if [ -e blah ]; then ...; else ...; fi or something similar. What’s the problem? if has at most two branches, while test, [ and [[ have three different kinds of exit codes. Oops.

If you make a syntax error (or any other error occurs) in the test, [ or [[ expression, it will return exit code 2 (or above, according to man test). if will take the else branch. If you are lucky, you will notice the error message from the test, [ or [[ command. If not, the else branch will always be executed.
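A minimal demonstration of the pitfall (note the typo: 1O is the letter O, not ten):

x=5
if [ "$x" -lt 1O ]; then    # typo: letter O instead of zero
    echo "less"
else
    echo "not less"         # this branch runs!
fi
# Prints an error like "[: 1O: integer expression expected",
# then "not less": [ exited with code 2, so if took the else branch
# even though the comparison was never actually evaluated.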

I don’t want to use bash. The pitfall above is one of many reasons. Unfortunately, I do use bash, because it’s still the best tool for some tasks. I’m working on an alternative to bash called NGS, the Next Generation Shell. In NGS, the situation above is handled as one would expect from a modern programming language: exit codes 2 and above throw an exception.

If you also think that there should be a viable alternative to bash, you are welcome to help me work on it.

Update 2021-12-31: a real-world example is at https://github.com/awslabs/aws-lambda-cpp/issues/140


Happy coding! Hope it’s not in bash 🙂

Bezeq International “protection”

Hello!

I got a “protection” feature by default (and I didn’t even notice I had it until now) from my internet provider, Bezeq International. In the last few days I was experiencing selective reachability: some IPs were just blocked by the “protection”.

I spent more than 20 minutes with support, who wanted to install their binaries on my laptop (which I couldn’t do, for many reasons), and then about 5 minutes with a more senior guy who, after hearing the symptoms, just turned the thing off. Everything works fine now.

Hope this helps other people recognize the situation and immediately know what’s happening.

Details follow:

  • One of GitHub’s web IPs was blocked.
  • Broken Facebook:
    ;; ANSWER SECTION:
    static.xx.fbcdn.net. 3599 IN CNAME scontent.xx.fbcdn.net.
    scontent.xx.fbcdn.net. 59 IN A 157.240.1.23
  • Broken AWS. Manifested in timeouts talking to various service endpoints.

(Screenshots of http://ec2-reachability.amazonaws.com/ from 2018-07-24 followed here.)

Terraform 0.12 language looks bad

I was hoping that smart guys vs. a bad situation would have a different outcome, but the Terraform language for version 0.12 looks bad… like the languages of Puppet and Ansible.

I’m not saying the people who made Puppet and Ansible are not smart. It’s that we could learn from the mistakes they made… unless we don’t consider those to be mistakes.

Puppet and Ansible went through a very similar difficult situation: they limited themselves to a declarative format and then tried to accommodate real life. Terraform is in this situation right now.

The situation is:

  • Declarative format being used
  • People need something more powerful, like a programming language, because of… real life, where conditionals, loops, and data transformations make much more sense than working around the limitations of declarative languages.

Interestingly enough, none of them switched to a proper programming language. Maybe because that would be at least partially admitting that the product should have been a library in the first place?

Terraform is actually in a very crappy situation: even if they decided to expose everything as a library as the main interface, I don’t see people starting to use Go for “infrastructure as code”. It’s not as smooth as Ruby or Python anyway.

Happy coding, everyone!

Update (2018-07-21):

On a bit more positive note, the new splat operator looks like an improvement.

Update (2018-07-27):

Terraform looks even more like a “normal” language with Conditional Operator Improvements and the null value. The conditional operator fixes the oddities it previously had.

Update (2018-08-02):

Terraform got a type system. It looks powerful. Just need to make sure Terraform doesn’t evolve into Scala 🙂

Update (2018-08-11):

The new template syntax brings more raw power. Looks good.

Update (2018-08-26):

  • HCL to JSON one-to-one mapping. When I read “having a clean 1:1 mapping between HCL and JSON, and ensuring every feature of HCL is supported in JSON” I immediately thought that there must be conversion tools then… and was not disappointed 🙂 “In future versions of Terraform, we will also support native tooling to convert HCL to JSON and JSON to HCL cleanly (including comments)”
  • “Comments in JSON” – nice!