Functional Conference, 2015

Last year, I attended the rather useful first edition of the Functional Conference, and returned this year too.

I walked in while the keynote, by Amit Rathore, was in progress. The talk seemed to veer in the direction of “functional programming is awesome; functional programmers are ninjas, or some equivalent fighter-hero types”. I am not a big fan of that sort of messaging; it does not appeal to me. But it was interesting to note that Amit’s company continued to build on the advantages that functional languages provide, and was able to keep the organization nimble and productive, given its mission to disrupt the media industry.


KDB+, Bangalore meetup, and more

Noticed a “kx community” group appear on Meetup a few months ago, and decided to join it. I have always been intrigued by the languages K, J, and subsequently Q. The accompanying database, kdb+, is an interesting one too. This combination of languages and database has worked well in the time-series analysis domain. These products from Kx Systems – the company behind K, Q, and kdb+ – have consistently appeared in the STAC benchmarks for years now.

The terseness of the K and Q languages, and their performance capabilities, were attractive propositions to me, but I never had a sufficiently rich time-series analysis problem to utilize the power of these languages.
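
To make the terseness point concrete: a typical q aggregation is a one-liner along the lines of “select avg price by sym from trade” (a generic example, not a query from any talk). Here is a rough plain-Python equivalent of that grouping, just for contrast:

```python
# The q one-liner `select avg price by sym from trade`, spelled out in
# plain Python over a small in-memory "trade" table (illustrative data).
from collections import defaultdict

trade = [
    {"sym": "AAA", "price": 10.0},
    {"sym": "AAA", "price": 12.0},
    {"sym": "BBB", "price": 7.5},
]

sums = defaultdict(float)
counts = defaultdict(int)
for row in trade:
    sums[row["sym"]] += row["price"]
    counts[row["sym"]] += 1

avg_by_sym = {sym: sums[sym] / counts[sym] for sym in sums}
print(avg_by_sym)  # {'AAA': 11.0, 'BBB': 7.5}
```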

Serendipitously, this meetup came along, along with a talk by a pharma company on their experience with kdb+. I promptly decided to attend the talk and get to know more about the kdb world.

Meetup

The meetup was a good one – a room of twenty-plus participants – and the talk by Purdue Pharma turned out to be rather interesting too. They saw a drastic reduction in the infrastructure and people costs associated with time-series analysis once they moved to kdb+: less hardware, and a team reduced to a couple of people (from a dozen or so previously). This was accompanied by a runtime performance boost of a couple of orders of magnitude. It seemed almost too good to be true – a 100x improvement in performance alongside a 5x reduction in cost.

The rest of the talk focused on demonstrating this performance gain, and on how they went about integrating web technologies with kdb+ – charting results using ECharts, for instance (they had explored Highcharts earlier, but settled on ECharts because of its “efficiency and performance” – something I heard and am paraphrasing, not something I have confirmed).

The kdb+ connectivity option for JavaScript caught my attention, and it would be nice to run d3.js against some of the kdb+ data (there is an example of AngularJS with kdb+).
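
For my own notes, here is a sketch of pulling the same sort of data into Python using the qPython driver (the host, port, and “trade” table are assumptions, not details from the talk); a d3.js page would consume similar data through kdb+’s HTTP/WebSocket-facing handlers instead:

```python
# Hedged sketch: query a running kdb+ process from Python via qPython.
# Assumes a kdb+ instance listening on localhost:5001 with a `trade` table.
from qpython import qconnection

q = qconnection.QConnection(host='localhost', port=5001)
q.open()
try:
    # Pull a small aggregation to feed a chart or further analysis.
    data = q('select avg price by sym from trade')
    print(data)
finally:
    q.close()
```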

kdb+ in India

  • I suggested creating a MOOC / Coursera course to make this knowledge more accessible (yes, the kx community exists and does a good job, but most college graduates and people who want to learn usually prefer some form of MOOC – this has been my experience so far).
  • Talk at, or conduct a workshop at, the Functional Conference 2015 in Bangalore, which runs Sep 10–13, 2015.
  • More meetup talks on kdb+ experiences, and also some hands-on sessions with the technology.


PostgreSQL: json, jsonb support

I follow, and use, the pyDAL and web2py projects quite closely. I like the way these projects are engineered, and the communities around them.

One of the discussion points that came up was the wonderful support for json and jsonb in PostgreSQL. As part of that discussion, I started jotting down some notes on the differences between vanilla json and PostgreSQL’s jsonb. These differences are described in the PostgreSQL documentation; I added the “implications” column below.
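
As a quick illustration of one such difference – a minimal sketch, assuming a reachable PostgreSQL instance, the psycopg2 driver, and a placeholder connection string:

```python
# json stores the input text verbatim; jsonb normalizes it on the way in:
# duplicate keys collapse (the last one wins) and whitespace is dropped.
import psycopg2

conn = psycopg2.connect("dbname=test")  # placeholder connection string
cur = conn.cursor()

doc = '{"a": 1, "a": 2,   "b": 3}'
cur.execute("SELECT %s::json::text, %s::jsonb::text", (doc, doc))
as_json, as_jsonb = cur.fetchone()

print(as_json)   # {"a": 1, "a": 2,   "b": 3}  -- preserved as typed
print(as_jsonb)  # {"a": 2, "b": 3}            -- normalized on ingest
```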

Functional Conference, Bangalore

Thanks to Twitter, I came across an announcement for a Functional Conference in Bangalore. I usually do not attend conferences, but a conference focused on functional programming and ideas was a novelty in India. I quickly took a look at the people behind the conference, liked what I saw, and booked a place for myself. This was back in June 2014.

Coming to the conference itself: the registration process was a breeze, and the venue was a rather good hotel with decent conference-room facilities. Most of the talks listed were interesting. I attended the following sessions:

  • Functional Reactive UIs with Elm – I was curious about Elm, and the content was interesting enough, though the delivery was not engaging.
  • Applying functional programming principles to large scale data processing – an application of the “lambda architecture” to data processing. I would have liked more depth/detail in this session. In any case, it introduced me to the phrase “lambda architecture” – I had been saying “functional style of architecture” in my design discussions earlier.
  • Compile your own cloud with Mirage OS v2.0 – creating a “unikernel”, where the OS is treated as a library and statically linked to the user application, with all of it running as a single binary. Wow! I wanted to attend this session because OCaml was mentioned in its abstract, and it turned out to be a session that got me thinking about possibilities – something I continue to think about. It also gave me a lot of vocabulary and ideas for something I once proposed within IBM – the concept of “lean middleware”: reduce indirections, and use the OS as a library.
  • Property based testing for functional domain models – I have been following Debasish on Twitter, and reading the lucidly written entries on his blog. This session was presented rather well, and I got to learn about “property-based testing” and dependently typed languages such as Idris. It is something I want to use at the next opportunity I get (a minimal sketch follows this list).
  • Code Jugalbandi – an interesting experiment: a quick introduction to the idioms of different programming languages – Scala, Clojure, Groovy.
  • Learning (from) Haskell – An experience report – I liked the way this was presented, and the learnings that were applied to improve code quality by reinforcing best practices and idioms in a language of choice – Python, Ruby, etc.
  • Pragmatic Functional Programming using Dyalog – it just so happened that I had installed Dyalog and played with APL a couple of months before this session, so APL’s terseness and unique syntax were things I was already aware of. This session made me rethink how important concise, yet readable, code can be. The demo where Morten – the presenter – scraped Wikipedia content to create a FOAF network was really interesting.
  • Monads you already use (without knowing it) – an introduction to monads.
  • Purely functional data structures demystified – a very good introduction to a rather dense topic, which inspired me to look up the work Okasaki has done in this area.
  • An introduction to Continuation Passing Style (CPS) – the topic started off on a rather simple note, and quickly developed into something that made me sit on the edge of my seat – an intellectually stimulating session.
  • Keynotes – I liked the engaging keynote by Daniel, where my key takeaway was “approach new topics with an open mind, and treat people with kindness”.
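
As promised above, here is a minimal flavor of property-based testing, written in Python with the hypothesis library (an illustrative stand-in, not what the session itself used):

```python
# Property-based testing: instead of hand-picked cases, state a property
# and let the library generate many inputs trying to falsify it.
from hypothesis import given, strategies as st

def encode(xs):
    """Toy run-length encoder: [1, 1, 2] -> [[1, 2], [2, 1]]."""
    out = []
    for x in xs:
        if out and out[-1][0] == x:
            out[-1][1] += 1
        else:
            out.append([x, 1])
    return out

def decode(pairs):
    """Inverse of encode: expand each [value, count] pair."""
    return [x for x, n in pairs for _ in range(n)]

@given(st.lists(st.integers()))
def test_roundtrip(xs):
    # The property: decoding an encoding returns the original list.
    assert decode(encode(xs)) == xs
```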

Overall, the Mirage OS and property-based-testing sessions were the ones that engaged me the most, and the ones that got me thinking about possibilities.

The conference was well organized, and one that I would attend the next time around too.

Hearing a Dead language

The other day, I was having a conversation with one of the interns who works with me, and he mentioned that he is interested in speech-to-text technologies and is looking to work on projects in that space. A couple of days later, I suggested a project that may be interesting:

Hearing a dead language

  • We can convert speech to text.
  • We can then record all aspects of that speech – about the speaker, the geographic origin of the speaker, etc. I am guessing that the way people enunciate / pronounce / emphasize sounds and words depends on the kind of environment they are in – languages spoken in desert areas may sound different from ones spoken in the rainforest.
  • We then identify patterns, based on speech samples, which correlate geographic origin with speech patterns and sounds.
  • We then figure out, with some degree of certainty, what a language may have sounded like when spoken, based on its text representation and its geographic origin.

I know that this approach is simplistic, and there are quite a few “dots that need to be connected”, but this recent article in NatGeo seemed encouraging: “Does Geography Influence How a Language Sounds?”. Time to figure out, via a web search, whether some of these dots have already been connected.


Update 19 June 2013: Came across this interesting article on “India becoming a graveyard for languages”

Update 25 June 2013: “Audio Recordings of human languages”

Update 26 June 2013: “Preserving endangered languages before they disappear”

Update 03 July 2013: “Save a Language, Save a Culture”; “Vanishing languages…”

OpenCL

The last few weeks, I have been looking at the map-reduce area, specifically Hadoop and IBM’s BigInsights. This led to questioning how the data-parallel activities could be sped up, and I found references to the use of GPUs for such data-parallel tasks. A quick Yahoo search kicked off a chain of site visits – the Khronos and MacResearch pages among them.

The promise of a GPU-powered speed-up in processing is a rather seductive one. There are, of course, a bunch of disadvantages to going with the GPU (refer to the Khronos or MacResearch links for details).
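
To make the data-parallel idea concrete, here is a minimal sketch using PyOpenCL (assuming the pyopencl and numpy packages and a working OpenCL driver; this is the canonical element-wise kernel, not tied to Hadoop or BigInsights):

```python
# Element-wise vector addition on whatever OpenCL device is available.
import numpy as np
import pyopencl as cl

a = np.random.rand(50000).astype(np.float32)
b = np.random.rand(50000).astype(np.float32)

ctx = cl.create_some_context()   # pick any available OpenCL device
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_g = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# Each work-item adds one pair of elements -- the data-parallel part.
prg = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

prg.add(queue, a.shape, None, a_g, b_g, out_g)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_g)
assert np.allclose(out, a + b)
```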

This is an area I will be looking at in some detail going forward.

XSLT based mapper for WebSphere sMash

Some time ago, my team decided to build a WebSphere sMash-based data-integration application, and as part of that we decided to enhance the existing data-mapper user interface in sMash. The idea was to provide a mapper UI which displays the input and output data structures, allows the user to map the attributes, and also lets them apply transformation rules on those mappings – making the whole data mapping / transformation experience a little more intuitive.
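
For flavor, this is the kind of XSLT transformation such a mapper might emit – a sketch applied here with Python’s lxml, not the sMash implementation, and with invented element names:

```python
# Apply a simple attribute-to-attribute XSLT mapping with lxml.
from lxml import etree

stylesheet = etree.XML(b"""
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/person">
    <customer>
      <!-- map the input "name" element onto the output "fullName" -->
      <fullName><xsl:value-of select="name"/></fullName>
    </customer>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(stylesheet)
source = etree.XML(b"<person><name>Ada</name></person>")
print(etree.tostring(transform(source), pretty_print=True).decode())
# -> <customer><fullName>Ada</fullName></customer>
```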

Some experiments later, we posted a forum topic, and the sMash development team liked the idea.

With the latest release of sMash, this support is available to all sMash developers and users. Karthik and Thiru did a great job of making this happen.