Bezeq International “protection”

Hello!

I’ve got a “protection” feature enabled by default (and I didn’t even notice I had it until now) from my internet provider, Bezeq International. In the last few days I was experiencing selective reachability: some IPs were simply blocked by the “protection”.

More than 20 minutes with support, who wanted to install their binaries on my laptop (which I couldn’t do for many reasons), and then about 5 minutes with a more senior guy who, after hearing the symptoms, just turned that thing off. Everything works fine now.

Hope this helps other people recognize the situation and immediately know what’s happening.

Details follow:

  • One of GitHub’s web IPs was blocked.
  • Broken Facebook:
    ;; ANSWER SECTION:
    static.xx.fbcdn.net. 3599 IN CNAME scontent.xx.fbcdn.net.
    scontent.xx.fbcdn.net. 59 IN A 157.240.1.23
  • Broken AWS. Manifested in timeouts talking to various service endpoints.

Following are just screenshots of http://ec2-reachability.amazonaws.com/:

 

[Ten screenshots of the reachability test results, taken 2018-07-24]
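
For anyone who wants to quickly check whether they are hitting something similar, here is a rough sketch (assuming dig and curl are available; the host name is just the one from the Facebook example above, substitute whatever is unreachable for you):

HOST=scontent.xx.fbcdn.net   # example host; not special in any way
for ip in $(dig +short "$HOST" | grep -E '^[0-9.]+$'); do
    echo "== $ip =="
    # Force curl to use this specific IP; a timeout on some IPs but not others
    # suggests selective blocking rather than a problem with the site itself.
    curl -s -o /dev/null -m 5 -w '%{http_code}\n' --resolve "$HOST:443:$ip" "https://$HOST" \
        || echo "timed out / connection failed"
done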

Terraform 0.12 language looks bad

I was hoping that smart guys vs. a bad situation would have a different outcome, but the Terraform language for version 0.12 looks bad… like the languages of Puppet and Ansible.

I’m not saying that the people who made Puppet and Ansible are not smart. It’s that we could learn from the mistakes they made… unless we don’t consider those to be mistakes.

Puppet and Ansible went through a very similar difficult situation: they limited themselves to a declarative format and then tried to accommodate real life. Terraform is in this situation right now.

The situation is:

  • A declarative format is being used.
  • People need something more powerful, like a programming language, because… real life, where conditionals, loops and data transformations make much more sense than working around the limitations of a declarative language.

Interestingly enough, none of them switched to a proper programming language. Maybe because that would be at least partially admitting that the product should have been a library in the first place?

Terraform is actually in a very crappy situation, because even if they decided to expose everything as a library as the main interface, I don’t see people starting to use Go for “infrastructure as code”. It’s not as smooth as Ruby or Python anyway.

Happy coding, everyone!

Update (2018-07-21):

On a bit more positive note, the new splat operator looks like an improvement.

Update (2018-07-27):

Terraform looks even more like a “normal” language with the Conditional Operator Improvements and the null value. The conditional operator fixes the oddities it previously had.

Update (2018-08-02):

Terraform got a type system. Looks powerful. Just need to make sure Terraform does not evolve into Scala 🙂

Update (2018-08-11):

New template syntax brings more raw power. Looks good.

 

Terraform becomes a programming language

Declarative languages failure

An approach that, in my eyes, has failed again and again is to start with your own declarative language and then grow the language over time (SQL being among the notable exceptions).

Puppet is the best example. map and each, added in Puppet 4.0.0, are, in my opinion, just two drops in a sea of evidence that the envisioned simple format has failed to handle the needs of the real world.

Ansible’s loop looks as bad as the whole idea of writing the top levels of programs in a YAML-based syntax (and the rest in Python).

In my opinion, it makes more sense to create a language first and then libraries for it, not a library and then a language around it.

My hope for Terraform

I think the Terraform guys are smart. Among other things, this manifests in the implementation of data sources. Data sources make Terraform much more flexible. I think it’s very clever.

Terraform, which started declarative, is now inventing its own programming language. It is going the way of Puppet and Ansible. I hope they can do better in this awkward situation: there are quite a lot of constraints on the programming language because of the existing syntax and semantics.

Happy coding, everyone!

 

List of JSON tools for command line

I am considering making a JSON parsing and generating command line tool, so I started by looking around a bit. Below is a list of existing JSON command line tools. Numbers are GitHub stars at the time of writing this post. “(item contributed by …)” means that this post was updated with that item.

  • jq [11126] – filter, extract, modify and output JSON or text using DSL
  • jid [4426] – “You can drill down JSON interactively by using filtering queries like jq.” (item contributed by /u/Tacticus)
  • gron [4103] – convert JSON or JSON lines (from file/stdin/url) to text (path=value) which can be processed with grep/sed/diff; the tool also supports converting back to JSON after such processing
  • jo [2209] – generate JSON based on command line arguments and stdin; can read data from files and place it as base64 encoded values
  • JSON.sh [1635] – written in shell/gawk; “traverses the JSON objects and prints out the path to the current object (as a JSON array) and then the object, without whitespace”
  • jsawk [1239] – focused primarily on filtering and transforming a list (or an object)
  • json (by trentm) [1218] – “massaging JSON on your Unix command line”; JS-like syntax for extracting values; in-place file editing
  • rq [1007] – awk/sed-like tool for structured data; supports several formats, including JSON
  • TickTick [469] – use JSON syntax directly in bash; “This is just a fun hack”
  • jshon [309] – very CLI-ish way to extract, manipulate and output the data
  • jl [308] – “a tiny functional language for querying and manipulating JSON”; visually reminiscent of Haskell
  • jsonpp [244] – JSON pretty printer (item contributed by /u/ferbass)
  • fx [227] – conveniently run your JS code to manipulate JSON.
  • RecordStream [224] – create, manipulate and output records; supports JSON; Perl-based, so grep expressions, for example, are written in Perl.
  • JSON.awk [186] – JSON.sh fork in awk; after the fork, the projects added different features.
  • jp [184] – “command line interface to JMESPath” (link contributed by Evgeny Zislis)
  • json-command [143] – conveniently manipulate JSON using JS.
  • jsonv.sh [130] – convert JSON to CSV; the JSON paths of the values to extract are specified as arguments
  • jgrep (aka “JSON-grep”) [78] – “Command line tool and API for parsing JSON documents” in Ruby (item contributed by /u/tophlammiepie)
  • jsed [48] – manipulate and extract data; somewhat similar to jsawk in mindset
  • jsongrep [9] (by dsc) – extract data at given path using shell globs and output one per line
  • jsongrep [0] (by terrycojones) – easily extract data at given path
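
To give a feel for the first couple of tools, here is a small example (the sample JSON is made up):

# jq: extract fields and build a new object from a (made-up) document
echo '{"name": "web-1", "tags": {"env": "dev"}}' | jq '{host: .name, env: .tags.env}'
# => { "host": "web-1", "env": "dev" }

# gron: flatten the same document into greppable path=value lines
echo '{"name": "web-1", "tags": {"env": "dev"}}' | gron
# json = {};
# json.name = "web-1";
# json.tags = {};
# json.tags.env = "dev";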

Honorable mentions

 


If you feel that some project is missing from the list, please let me know in comments below.

No, not everybody uses X

How likely are you to give the wrong answer when everybody in the group gives the wrong answer? More likely than without the group. That’s what the Asch conformity experiments have shown.

Marketing people know that. So when you get the impression that “everybody uses X”, please be aware that this impression can be intentional and may not match reality. It can be just a trick. Bloggers, for example, have incentives to write about X (consultants can make money if you adopt X).

[Image: Makers of X must continue to grow no matter what]

I don’t want to accuse any specific firm or product in this post, but I am suspicious of the “coolest”, most advertised products, the ones most pushed down our throats. There is no guarantee that the makers of X are interested in your success. They are surely interested in their own growth and success.

Hope this post increases your chances of surviving the next marketing attack, just by being aware of yet another deceitful marketing technique.

 

How fucked is AWS CLI/API

In the 21st century, we can do significantly better than this crap of an AWS CLI. We just need to think from the users’ perspective for a change, and not just dump on users whatever we were using internally, or saw in a nightmare and then implemented.

[Image: Think from the users’ perspective]

I’m working on a solution, which is described towards the end of the post. It’s not ready yet, but the (working, by the way) examples will convince the reader that “significantly better” is very possible… I hope.

Background

I’m building an AWS library and a command line tool. Since I’m doing it for a shell-like language (NGS), the AWS library that I’m building uses the AWS CLI. Frustration and anger are prominent feelings when working with the AWS CLI. I will just list here some of the reasons behind those feelings.

[Image: AWS CLI/API – WTF were they thinking?]

Overall impression

Separate teams worked on different services without talking to each other. Then something like this happened: “OK, we have these APIs here, let’s expose them. User experience? No, we don’t have time for that / we don’t care.”

Tags

Representing a map (key-value pairs) as an array ( "Tags": [ { "Key": "some tag name", "Value": "some tag value" }, ... ] ), as the AWS CLI does, is insane… but only when you think about the user (in this case, developer) experience.

[Image: AWS Tags – no, not in this form!]
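
If you do want the tags as a plain map, a little jq (assuming it is installed) can reshape them; a minimal sketch with made-up tag data:

# Tags in the shape the AWS CLI returns them, reshaped into a plain map
echo '[{"Key": "Name", "Value": "web-1"}, {"Key": "env", "Value": "dev"}]' \
    | jq 'map({(.Key): .Value}) | add'
# => { "Name": "web-1", "env": "dev" }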

Reservations

When listing EC2 instances using aws ec2 describe-instances, the data is organized as a list of Reservations, and each Reservation has a list of instances. I have been using AWS for quite a few years and I have never needed to do anything with Reservations, but I did spend some time unwrapping, again and again, the interesting data (the list of instances) from the unwanted layer of madness. It really feels like “OK, internally we have Reservations and that’s how our API works, let’s just expose it as is”.

[Image: Who da fuck ever used Reservations?]
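
The unwrapping itself is easy enough with jq (assuming it is installed); a minimal sketch:

# Flatten the Reservations layer to get a plain list of instances
aws ec2 describe-instances | jq '[.Reservations[].Instances[]]'

# Or just the instance ids
aws ec2 describe-instances | jq -r '.Reservations[].Instances[].InstanceId'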

Results data structure

The AWS CLI usually (of course not always!) returns a map/hash at the top level of the result. The most interesting data is called, for example, LoadBalancerDescriptions, or Vpcs, or Subnets, or … which makes building generic tooling around the AWS CLI difficult. Thanks! Do you even use your own AWS CLI, Amazon?
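
A small illustration of the point: every command needs its own special-casing just to get at “the data” (a sketch, assuming jq):

aws elb describe-load-balancers | jq '.LoadBalancerDescriptions | length'
aws ec2 describe-vpcs           | jq '.Vpcs | length'
aws ec2 describe-subnets        | jq '.Subnets | length'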

Inconsistencies

[Image: Kind of the same but not really]

Security Groups

These are of course special. Think of aws ec2 create-security-group ...

  1. --group-name and --description are mandatory command line arguments. So far, I haven’t seen any other resource creation that requires both a name and a description.
  2. The command returns something like { "GroupId": "sg-903004f8" }, which is unlike many other commands, which return a hash with the properties of the newly created resource, not just the ID.
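
A side-by-side sketch of that second point (the group name and the VPC_ID variable are placeholders):

# Returns only the ID of the new group:
aws ec2 create-security-group --group-name test-sg --description "test group" --vpc-id "$VPC_ID"
# => { "GroupId": "sg-..." }

# While, for example, creating a subnet returns the full description of the new resource:
aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.1.0/24
# => { "Subnet": { "SubnetId": "...", "VpcId": "...", "CidrBlock": "10.0.1.0/24", ... } }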

Elastic Load Balancer

Oh, I like this one. It’s unlike any other.

  1. The unique key by which you find a load balancer is its name, unlike other resources, which use IDs.
  2. Tags on load balancers work differently. When you list load balancers, you don’t get the tags with the list, like you do when listing instances, subnets, VPCs, etc. You need to issue an additional command: aws elb describe-tags --load-balancer-names ...
  3. aws elb describe-load-balancers – items in the returned list have the field VPCId, while other places usually name it VpcId.
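
So getting classic load balancers together with their tags ends up looking roughly like this (a sketch, assuming jq; note that describe-tags only accepts a limited number of names per call, so a long list would need batching):

aws elb describe-load-balancers \
    | jq -r '.LoadBalancerDescriptions[].LoadBalancerName' \
    | xargs aws elb describe-tags --load-balancer-names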

Target groups

aws elbv2 deregister-targets --target-group-arn ...

Yes, this particular command and its “target group” friends use an ARN, not a name (as ELB does) and not an ID (like most EC2 resources).

DHCP options (sets)

(I haven’t used these, but I considered them, so I looked at the documentation and here is what I found.)

Example: aws ec2 create-dhcp-options --dhcp-configuration "Key=domain-name-servers,Values=10.2.5.1,10.2.5.2"

Yes, the create syntax is unlike other commands and looks like filter syntax. Instead of, say, a --name-servers ns1 ns2 ... switch, you have "Key=domain-name-servers,Values=…". WTF?

Route 53

aws route53 list-hosted-zones does not return the tags. Here is an output of the command:

{
    "HostedZones": [
        {
            "Id": "/hostedzone/Z2V8OM9UJRMOVJ",
            "Name": "test1.com.",
            "CallerReference": "test1.com",
            "Config": {
                "PrivateZone": false
            },
            "ResourceRecordSetCount": 2
        },
        {
            "Id": "/hostedzone/Z3BM21F7GYXS7Y",
            "Name": "test2.com.",
            "CallerReference": "test2.com",
            "Config": {
                "PrivateZone": false
            },
            "ResourceRecordSetCount": 2
        }
    ]
}

Wanna get the tags? F*ck you! Here is the command: aws route53 list-tags-for-resources --resource-type hostedzone --resource-ids Z2V8OM9UJRMOVJ Z3BM21F7GYXS7Y . Get it? You are supposed to take the Id field from the list generated by list-hosted-zones, split it on the slash and then use the last part as the resource id. Tagging zones also uses the rightmost part of the id: aws route53 change-tags-for-resource --resource-type hostedzone --resource-id Z2V8OM9UJRMOVJ --add-tags Key=t1,Value=v11
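
Scripted, that dance looks roughly like this (a sketch, assuming jq):

# Strip the "/hostedzone/" prefix to get the ids the tagging API expects
aws route53 list-hosted-zones \
    | jq -r '.HostedZones[].Id | ltrimstr("/hostedzone/")' \
    | xargs aws route53 list-tags-for-resources --resource-type hostedzone --resource-ids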

… but that apparently was not enough differentiation 🙂 Check this out: aws elb add-tags --load-balancer-names NAME --tags TAGS vs aws route53 change-tags-for-resource --resource-type hostedzone --resource-id ID --add-tags TAGS, and aws elb remove-tags --load-balancer-names NAME --tags TAGS vs aws route53 change-tags-for-resource --resource-type hostedzone --resource-id ID --remove-tag-keys TAGS. Trick question: in how many dimensions do these differ? So wrong on so many levels 🙂

Here is another one: when you fetch records from a zone, you use the full id, /hostedzone/Z2V8OM9UJRMOVJ , not Z2V8OM9UJRMOVJ: aws route53 list-resource-record-sets --hosted-zone-id /hostedzone/Z2V8OM9UJRMOVJ

 

(many more to come)

Expected comments and answers

[Image: Internet discussions are the best 😉]

Just use Terraform

I might some day. For the cases I have in mind, for now I prefer a tool that:

  1. Is conceptually simpler (improved scripting, not something completely different)
  2. Doesn’t require a state file (I don’t want to explain to a client with two servers where the state file is and what it does)
  3. Can be easily used for ad-hoc listing and manipulation of the resources, even when these resources were not created/managed by the tool.
  4. Can stop and start instances (impossible with Terraform last time I checked a few months ago)

Just use CloudFormation

I don’t find it convenient. Also, see Terraform points above.

By the way, phrases of the form “Just use X” are not appreciated.

You just shit on what other people do to push your tools

Yes, I reserve the right to shit on things. Here I mean specifically the AWS CLI. Freedom of speech, you know… especially when:

  1. The product (AWS API) is technically subpar
  2. The creator of the product does not lack resources
  3. The product is used widely, wasting huge amounts of time of the users

If I’m investing my time in writing my own tool purely out of frustration, I will definitely say why I think the product is crap.

Is that just a rant or you have alternatives?

I’m working on it. The command line tool is called na (Ngs Aws). It’s a thin wrapper around a declarative primitives library for AWS.

[Image: There might be a solution to this madness!!!]

The command line tool that I’m working on is nowhere near ready, but here is the gist of what you can do with it:

# list vpcs
na vpc

# Find the default VPC
na vpc IsDefault:true

# List security groups of all vpcs which have tag "Name" "v1".
na vpc Name=v1 sg

# Idempotent operations, in this case deletion.
# No error when this group does not exist.
# Delete "test-group-name" in the given vpc(s).
na vpc Name=v1 sg GroupName:test-group-name DEL

# List all volumes which are attached to stopped instances
na i State:stopped vol

# Delete all volumes that are not attached to instances
na vol State:available DEL

# Stop all instances which are tagged as
# "role" "ocsp-proxy" and "env" "dev".
na i role=ocsp-proxy env=dev SET State:stopped
  1. Na never dumps huge amounts of data to your terminal. As a human, you will probably not be able to process it, so when the number of items (rows in a table, actually) is above a certain configurable threshold, you will see something like this:
    $ na vol 
    === Digest of 78 rows ===
    ...

    It will show how many unique values there are in each column, and the min and max value for each column. I’m thinking about also displaying the top and bottom 5 (or so) values for each column.

  2. Na has a concept of “related” resources, so when you write something like na i State:stopped vol , it knows that the volumes you want to show are related to the instances you mentioned. In this particular case, that means volumes attached to those instances.
  3. Note the consistency between what you see in the output and the arguments to the CLI. If something is called “State” in the data structure returned by the AWS CLI, it will be called “State” in the query, not “state” (“--state”).

I will be updating this post as things come along.