Posted by & filed under Business, communication, execution, innovation, leading teams, management, managing people.

By Theodore Kinni

Theodore Kinni has written, ghosted, or edited more than 20 business books. He was book review editor for strategy+business for 7 years.

Companies spend a lot of money on innovation. According to an annual study by PwC's Strategy&, the top 1000 corporate R&D spenders invested $647 billion in their quest for innovation in 2014. That’s more than the individual GDPs of all but the 30 most prosperous countries in the world.

Given this level of spending, I assumed that these companies—the so-called Global Innovation 1000—were getting a pretty hefty return on their investment. But I was wrong about that: Year after year, Strategy& reports that there is “no statistically significant relationship between sustained financial performance and R&D spending” among these enterprises.

That didn’t make a lot of sense to me, until I realized that the lack of correlation probably wasn’t between innovation spending and corporate success as much as it was between innovation spending and innovation success. Unless your R&D spending actually generates some kind of commercially viable innovation, it’s not going to translate into financial performance, is it?

Read more »

Posted by & filed under learning, Learning & Development.

I tend to spend all of my focused learning time on understanding new technology. My approach is to bang my head against a side-project—while reviewing documentation—until I get it working. I didn’t use this approach when I recently decided to review math fundamentals, and as a result found myself unable to gain momentum, falling asleep with chapters of different math books open, never making much progress.

Read more »

Posted by & filed under Amazon, Business, Content - Highlights and Reviews, culture, leadership, management, strategy.

By Theodore Kinni

Theodore Kinni has written, ghosted, or edited more than 20 business books. He was book review editor for strategy+business for 7 years.

I’ve been waiting for Amazon—with its annual sales of almost $90 billion in 2014—to crash and burn for a long time. There was no way that a public company could continue to operate almost entirely without profit year after year—Amazon lost $241 million in 2014. I was positive that a reckoning was just around the corner. Now, 20 years down the road, and before Jeff Bezos dispatches a fleet of delivery drones to bombard me with the company’s ubiquitous shipping cartons, I hereby publicly and unconditionally surrender. Never again will I mutter—even under my breath—about the company’s prospects.

No matter what you think of Amazon, it is clear that it is a juggernaut of a company—and that its leaders play a big role in its ability to generate top-line growth. That’s why it’s worth reading The Amazon Way: 14 Leadership Principles Behind the World’s Most Disruptive Company by John Rossman. Rossman, who was formerly Amazon’s director of enterprise services and now serves as managing director of professional services firm Alvarez & Marsal, says that the principles he describes in the book were embedded in the corporate culture by founder Bezos and remain the “core tenets on which company leaders are rigorously rated during their annual performance reviews and self-evaluations.” Here are a few that are especially notable—not so much for their commonsensical nature as for the diligence with which the company pursues them: Read more »

Posted by & filed under Business, Content - Highlights and Reviews, emotional intelligence, leadership, management, managing people, managing yourself.

By Theodore Kinni

Theodore Kinni has written, ghosted, or edited more than 20 business books. He was book review editor for strategy+business for 7 years.

It’s hard to imagine anyone who has influenced the discipline of leadership to a greater degree over the past 20 years than psychologist, consultant, and author Daniel Goleman. Goleman is best known for popularizing the concept of emotional intelligence (EI) with his book of the same name, which was published in 1995. Since then he’s been exploring a set of learnable capabilities relating to “how well we manage ourselves and our relationships” that can be developed to enhance personal and organizational performance. Read more »

Posted by & filed under Devops, Information Technology, infrastructure, IT, performance, Tech.

This post details what I learned about PostgreSQL temporary tables, some preventative measures you can take, some examples of how to test your work using Docker, and why I cared at all.

First, some background

I was awoken at 3am by PagerDuty telling me the disk for one of our database nodes was filling up.

After logging in, I saw several large SELECT statements that had been running for hours.

I determined that the temporary table space was filling up the OS disk, and fast.

I decided to kill the queries because they endangered the health of the entire system. Had the OS disk filled up, the entire system would have gone down and we would have risked data corruption.

Later, in a post-mortem meeting, we asked how the temporary table space had begun to fill the disk and what actions we could take to prevent this problem in the future.

What are temporary tables?


There are a few situations where PostgreSQL saves temporary files, ones that are not critical to database operation. Tables created using CREATE TEMPORARY TABLE and their respective indexes are one source. Probably more importantly, when the database is doing a query that involves a sort operation, and the data exceeds work_mem, temporary files are created for that purpose. So in situations where your users will be doing lots of sorting of large tables, like in a data warehouse, there can be quite a bit of activity going to disk for this purpose.
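Both sources are easy to picture with a quick sketch (the orders table here is hypothetical):

    -- Explicit: a temporary table, dropped automatically at the end of the session
    CREATE TEMPORARY TABLE recent_orders AS
        SELECT * FROM orders WHERE created_at > now() - interval '1 day';

    -- Implicit: a sort too large for work_mem spills into temporary files on disk
    SELECT * FROM orders ORDER BY customer_id;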

One interesting property of temporary files is that they’re prime candidates for storage even on less reliable drives, such as you might have your operating system on. It’s possible to safely, and sometimes quite usefully, put the temporary files onto a directory on your OS disks if they are underutilized. Just be careful because if those disks are lost, you’ll need to recreate that empty tablespace on the OS drives of the replacement, or remove it from the temp_tablespaces list.

What happens when your temporary table space fills up?

Because we can move the location that Postgres uses to write temporary table information, a good preventative measure would be to use a disk or partition separate from our OS or database data storage. That way, the temporary space filling up will not impact other pieces of our system.

But what does happen when that isolated temporary space fills up? Will Postgres keel over? Will the query hang? Are we just moving the problem and not really making the overall system more resilient? To answer these questions I did an experiment.

Experimenting via Docker

To understand more about Postgres’ actual behavior in stressful situations, I set up a local Docker container and installed Postgres on it. The goal of this experiment was to watch Postgres fill its temporary table space and observe what happens.

Establish a testing setup

The first thing I did was create a new space for Postgres to write its temporary table space to. I wanted this space to be small so that the failure would occur more quickly and so I could observe the results. To do this I created a loopback file system that was only 1MB in size and mounted that in the Docker container. This seemed like the easiest way to test my idea of putting temporary table space on a different disk. Then I made a directory owned by postgres inside of it for the tablespace to go into, as Postgres will need to be able to write to this directory.
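The recipe looks roughly like this (the backing file name is arbitrary; the mount point and directory match what the tablespace uses later):

    # Create a 1MB file to back a loopback filesystem
    dd if=/dev/zero of=/tmp/pgsql_tmp.img bs=1024 count=1024

    # Put a filesystem on it and mount it (-F lets mkfs operate on a plain file)
    mkfs.ext3 -F /tmp/pgsql_tmp.img
    mkdir -p /mnt/pgsql_tmp
    mount -o loop /tmp/pgsql_tmp.img /mnt/pgsql_tmp

    # Give the postgres user a directory it can write the tablespace into
    mkdir /mnt/pgsql_tmp/pg_tblspc
    chown postgres:postgres /mnt/pgsql_tmp/pg_tblspc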

Edit the postgresql.conf in your favorite editor (the configuration file is located at /etc/postgresql/9.1/main/postgresql.conf on my system).

First I set work_mem to 64kB. This is the smallest value Postgres will allow, and it ensures that my tests will fail quickly, as this value constrains the amount of work Postgres can do in memory.

Next I set log_temp_files to 0. This handy configuration value tells Postgres to log each time it writes to the temporary table space on disk.

We’re finished with setup after setting the temp_tablespaces value:
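Putting the three changes together, the relevant lines in postgresql.conf look like this:

    work_mem = 64kB                         # smallest value Postgres allows
    log_temp_files = 0                      # log every temporary file written, no matter how small
    temp_tablespaces = 'pgsql_temp_tblspc'  # the tablespace we create in the next step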

That table space needs to be created in Postgres, which is where the setup for this experiment is headed next.

But first start or restart the Postgres daemon to load the configuration changes we have made.
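On a Debian-flavored image like the one I used, that's typically:

    service postgresql restart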

Then I connected to the running Postgres daemon on my Docker container as the postgres user, using the psql command.
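From a shell in the container, that's something like:

    sudo -u postgres psql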

After that, I created a Postgres TABLESPACE.
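The name and location below match the configuration value and the directory from the earlier steps:

    CREATE TABLESPACE pgsql_temp_tblspc LOCATION '/mnt/pgsql_tmp/pg_tblspc';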

Note that the tablespace name pgsql_temp_tblspc matches the value we gave the temp_tablespaces option, and that the location /mnt/pgsql_tmp/pg_tblspc matches the directory we created earlier and gave permissions to.

Now that I had a weakened, instrumented Postgres, I created a test database and table and loaded some data.

First I created a test database named test and connected to it.
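In psql:

    CREATE DATABASE test;
    \c test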

Then I created a table in my test database and named it test.
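A minimal version of the table needs little more than an id and the random_text column that the test queries will sort on:

    CREATE TABLE test (
        id integer,
        random_text text
    );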

I then executed a Perl one-liner to generate some data and used the COPY command from inside Postgres to load that data into the test table.
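For example, a stand-in generator that writes an id and 50 random lowercase characters per row, tab-separated:

    perl -e 'for (1..100000) { printf "%d\t%s\n", $_, join("", map { chr(97 + int(rand(26))) } 1..50) }' > /tmp/test_data.tsv

Then, inside psql (server-side COPY needs the file to be readable by the postgres user; \copy works as well):

    COPY test FROM '/tmp/test_data.tsv';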

If you have a copy of some production data you would like to play with, that would work as well. This data set is tailored to my experiment, and ensures that Postgres will have to do some sorting that will quickly spill over into temporary table space.

Now we test

Now that we have an instrumented setup, we can run queries that need increasing amounts of work_mem:
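Queries along these lines, following the depesz methodology linked in the afterword, demonstrate both cases:

    -- 100 rows: the sort fits in work_mem (Sort Method: top-N heapsort)
    EXPLAIN ANALYZE SELECT * FROM test ORDER BY random_text LIMIT 100;

    -- 1,000 rows: the sort no longer fits and spills to disk (Sort Method: external merge Disk)
    EXPLAIN ANALYZE SELECT * FROM test ORDER BY random_text LIMIT 1000;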

Note the “Sort Method” used in these two cases. The first (100 rows) can be done entirely in memory; the second (1,000 rows) needed to write some portion to disk. The important bit is that because our query asks Postgres to order the results by the “random_text” column, Postgres is forced to do a sort that quickly needs more memory than the reduced work_mem we set earlier, so it spills over into temporary table space.

If you’re quick enough, you can watch the temporary table space filling up as the query runs in /mnt/pgsql_tmp/pg_tblspc/PG_9.1_201105231/pgsql_tmp/.
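One crude way to watch it, assuming watch and du are available in the container:

    watch -n 1 'du -sh /mnt/pgsql_tmp/pg_tblspc/PG_9.1_201105231/pgsql_tmp/'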

Finding the answer

Based on my first tests, I ran a query that I knew would take more than 1MB of temporary table space (100,000 rows) and watched it fail gracefully, log an error message, and clean up after itself.
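That query was essentially the same test scaled up:

    SELECT * FROM test ORDER BY random_text LIMIT 100000;

and it dies with an error along the lines of:

    ERROR:  could not write block ... of temporary file: No space left on device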

To me, this is the best-case scenario: the query used and filled the temporary table space I allocated for it; then Postgres killed it, cleaned up the disk space, and logged what happened to both the console and the log file.

In this scenario, no one would need to be woken up at 3am. We could send these log messages to our friends at Logentries, set an alert for that log pattern, and notify a HipChat room. That way, during waking hours, a less sleep-deprived human could track down how that query went off the rails and remedy it.

Afterword

I hope this experiment was as enlightening for you as it was for me, and that you are now armed with more knowledge to battle those nasty 3am disk usage alerts from PagerDuty on your Postgres database servers.


I used Kitematic for the Mac to test these theories with Docker and it was super easy and fast.

I shamelessly liberated the work_mem testing methodology from: http://www.depesz.com/2011/07/03/understanding-postgresql-conf-work_mem/

If we were using Postgres 9.2, we could use this setting:

http://www.postgresql.org/docs/9.2/static/runtime-config-resource.html

temp_file_limit (integer)
Specifies the maximum amount of disk space that a session can use for temporary files, such as sort and hash temporary files, or the storage file for a held cursor. A transaction attempting to exceed this limit will be cancelled. The value is specified in kilobytes, and -1 (the default) means no limit. Only superusers can change this setting.

This setting constrains the total space used at any instant by all temporary files used by a given PostgreSQL session. It should be noted that disk space used for explicit temporary tables, as opposed to temporary files used behind-the-scenes in query execution, does not count against this limit.
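For instance, to cancel any session that tries to use more than the 1MB of temporary space we allocated, the postgresql.conf entry would look something like:

    temp_file_limit = 1024    # specified in kB; -1 (the default) means no limit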

Posted by & filed under Content - Highlights and Reviews, Daily Learning, Learning & Development, Safari.

“We often assume that a complicated problem must have a complicated solution.” – More Fearless Change: Strategies for Making Your Ideas Happen

Last week, I woke up early to get on the phone with a client in Manhattan.

She told me her teams don’t read email, and she needs a new way to spread the word about what daily learning resources are available to them.

I asked her: “If they’re not communicating with email, what are they doing to encode and decode transmissions (a.k.a. communicate)?”

“HipChat,” she said.

Read more »

Posted by & filed under Devops, infrastructure, IT, Operations, programming.

What is Chef Provisioning?

Chef Provisioning is a drop-in library for Chef that gives developers and infrastructure teams an added dimension of automated system configuration: the ability to bootstrap and install a series of OS and configuration deployments onto “bare metal”. There are a variety of drivers that can be used as bare metal abstractions, including Docker, LXC, Fog (EC2 / DigitalOcean / OpenStack), AWS, Azure, Vagrant, VSphere, DigitalOcean, Hanlon, OpenCrowbar, and SSH. Chef Provisioning, combined with your choice of these drivers, provides a number of new abilities:

Read more »

Posted by & filed under Business, careers, Content - Highlights and Reviews, leadership, leading teams, management, managing yourself, Personal Development.

By Theodore Kinni

Theodore Kinni has written, ghosted, or edited more than 20 business books. He was book review editor for strategy+business for 7 years.

It’s easy to see the top of the corporate ladder, but successfully making the climb is an increasingly challenging undertaking. After years of rightsizing and delayering, the steps leading to the top of the ladder are fewer and farther apart than ever. And when you get a chance to stand on them, you’d better make the most of it—there are plenty of people climbing the ladder behind you.

How can you do that? Mark Miller, who since 1977 has climbed the corporate ladder from hourly team member to vice president of leadership development at Chick-fil-A, the $5 billion fast-serve restaurant chain, says you have to raise your game. Read more »