If you’ve used Chef, you’ve probably used a community cookbook. Community cookbooks are helpful because someone else has figured out how to solve your problem, be it installing nginx or configuring postgresql. While community cookbooks are great, they sometimes don’t include everything that you need. That’s where wrapper cookbooks come in. If you want to change or extend the functionality of a cookbook without having to rewrite it from scratch, a Chef wrapper cookbook is the way to go. Read more »
You’ve probably noticed we’ve made some changes around here, including a brand new design and sharp new logo.
Safari began more than 13 years ago as “Safari Tech Books Online,” with the promise of replacing the collection of IT and programming reference books on your shelf with something online and searchable. (As the saying goes, “You can’t grep dead trees.”) Read more »
A few months ago I noticed the JUnit Attachments Plugin and was inspired. Recording pictures and other files after test failures is such an obviously good idea, especially if you’re the one who has to fix the tests.
Unfortunately, the JUnit Attachments Plugin has some rough edges. I wrote a small library that handles the busywork while keeping tests readable. Since we are awesome, we open sourced it so you can use it too!
Safari is seeking help in usability testing for Safari Tutorials, our curated learning paths based on Safari books and videos. You will be speaking with folks from our product development team, and your input will directly influence how the product evolves in the next few months. That’s pretty cool, don’t you think?
Tests will be performed remotely via a Google Hangout video chat, running for 30 to 45 minutes. Session times are available on June 18 and June 19.
If you are interested in participating, we ask that you take a few minutes to complete this short questionnaire.
Those selected for usability testing will receive a $25 gift card (iTunes or Amazon) upon completion of the session.
Thank you for considering participating in our study. Feedback from our community is essential as we build products that help our users learn and grow. Plus we just like speaking with you.
I originally presented this talk on ebook markup to an audience of ebook developers and publishers. As someone who cares deeply about accessibility and discovery, it’s a subject that tends to get me agitated, but I tried to be extra-polite because my audience was Canadian.
My hope is that as web-based book resources like Safari continue to proliferate, publishers will take advantage of the opportunities afforded by many years of research by the web community into what makes content semantically rich, accessible, and competitive with the wealth of free material available on the open web.
We’ve made quite a few changes lately to the search engine and interface on Safari Flow. These changes were motivated in no small part by the evolution of the application itself. When Flow launched last summer, it included a highly curated collection of approximately 250 books and videos, primarily aimed at web developers. We’ve more recently decided to expand the scope of that content, and the original 250 has increased 100-fold — almost exactly, in fact: today, we’re rapidly approaching 25,000 titles on a myriad of topics from dozens of publishers.
The evolution of searching in Flow
Searching across 250 titles is not the same as searching across 25,000. We’ve addressed this shift with a number of refinements, some small and some large. Among the smaller changes we’ve made are adding autocomplete, giving users the option to search within specific indexes (author, title, publisher, and ISBN), and retuning how search results are weighted (one example: book titles are now weighted slightly more heavily than chapter titles).
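That retuned weighting might look something like the sketch below. This is purely illustrative — it assumes an Elasticsearch-style backend and the field names are invented, not a description of Flow’s actual implementation:

```javascript
// Hypothetical sketch of weighted search fields, assuming an
// Elasticsearch-style multi_match query. Field names are invented;
// the boost (^) values weight book titles above chapter titles.
function buildQuery(terms) {
  return {
    multi_match: {
      query: terms,
      fields: ['book_title^2.0', 'chapter_title^1.5', 'author', 'body'],
    },
  };
}
```

The point is simply that relative weighting is a per-field tuning knob, so “book titles slightly heavier than chapter titles” is a one-line change.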
More drastically, we refactored the interface of the results page. When we had fewer titles, the results page was quite simple and emphasized individual chapters over complete titles.
Search results in the new interface are much richer. Because users have so many more results from which to choose, we aimed to provide tools and information to make choosing easier. For example, results are shown in context, and the user has more filtering options. A user can filter out specific types of media, and sort on relevance or publication date. She can drill down further by clicking on a specific author or publisher. And perhaps the best part: search results are returned twice as quickly.
Reviewing the results
Our hypothesis was that providing users with a richer search experience would make the site easier to use and generally more engaging. Roughly 81% of logged-in visits include a search of some kind (40% of which originate on the home page), so it’s important to us that we get it right. So how are we doing?
The new search interface has been on the site for a couple of weeks now, and we’re encouraged by some of the early analytics. For example, the number of times users search again after the initial search has decreased by 7%. So they are finding what they want more often. Great!
We’ve also seen an uptick in engagement. For example, the amount of time a user stays on the site after searching has increased on average by over a minute. The average number of pages a user views after searching has gone up by a full page. We see these data points as signs that we are doing something right.
Not all of the analytics are unambiguously good or bad. One of the more perplexing data points is related to the new autocomplete feature. Specifically, autocomplete has not consolidated search terms. Before we added autocomplete, we saw 17,824 unique search terms across 26,814 unique searches, which is 66% unique. Since we added autocomplete (using a smaller sample size), we’ve seen 10,689 unique search terms across 13,262 unique searches, which is 81% unique. We’re not quite sure what to make of that!
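For concreteness, the uniqueness figures quoted above are just unique terms divided by total searches:

```javascript
// Uniqueness ratio: unique search terms as a percentage of total
// searches, rounded to the nearest point.
const uniquenessPct = (uniqueTerms, totalSearches) =>
  Math.round((uniqueTerms / totalSearches) * 100);

uniquenessPct(17824, 26814); // before autocomplete: 66
uniquenessPct(10689, 13262); // after autocomplete: 81
```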
This is of course not the end of the story. We know we still have a lot of work to do so that Flow’s search serves users accurately and quickly. Our users regularly write in to suggest changes, and we’re listening. And as you can see in this post, we’re making sure to measure the results of the choices that we make. In addition to the usage on Flow, we have the further benefit of over a decade’s worth of search-related data from our Safari Books Online platform.
We use Test Kitchen to test our continuous integration software at Safari Books Online. We run it on LXCs on both vagrant and testing nodes in our continuous integration pipeline. I was curious to see what was new with Kitchen CI and what Fletcher had to say. We use the framework in a standard way, though I learned the ‘kitchen diag’ command, which is short for ‘kitchen diagnose’ and helps diagnose errors within Test Kitchen. I also learned how easy it is to test operating systems outside our current production systems with Kitchen CI. This would be useful for testing cookbooks that we plan to release to the community. We could get a sense of which other OSes were already supported and, if not, what work would be needed to support them.
We use Berkshelf to manage our cookbook dependencies. We love it and it works well. Jamie talked about the vision for Berkshelf 3.0 and how the product has been improved. I downloaded and installed the new Chef DK to play with, which includes both Berkshelf 3 and Kitchen CI. His talk details how artifacts are more strictly regulated in Berkshelf 3. Jamie also spoke about an online game platform he built using Elixir. He will be speaking more about it at the Atmosphere Conference in Poland.
Rachel said something that has stuck with me and that I have repeated many times since. To paraphrase,
Automation Software is the Codification of Institutional Knowledge
I love that idea. Chef recipes are self-documenting processes for other engineers to learn from; what was once stored inside one person’s head is now codified in recipes.
My big take-away from this talk:
Automating everything allows your business to pivot.
Big and small companies need to pivot; markets change and businesses change. If your infrastructure creation is automated, you can pivot from one direction to another more easily and in a safer, testable way.
Justin gave a great talk on automation software in our contemporary corporate culture. He used the story of Moby Dick to illustrate different struggles we face. This was a delightful talk with fantastic artwork by Matt Kish.
It is easy for departments to live in silos. The current trend is to find ways to diminish the time wasted handing off tasks from one department to another. We’ve taken some of the recommendations to heart at Safari Books Online already. Historically, we’ve supported two Jenkins servers: one for IT and one for Engineering (Development), and we likewise have two source control repositories. We did this with good intentions: to stay out of each other’s way. For new projects that involve both departments, we try to consolidate source control and testing to the Engineering infrastructure. We now have software developers writing deployment cookbooks, which is exactly the direction we want to go. We have a long way to go, but this is a start.
Last summer I taught an accessibility tutorial at OSCON with Denise Paolucci. After the tutorial (which has training materials on GitHub), I was speaking with attendees about what else they wanted from accessibility training, and I had an epiphany. An OSCON epiphany.
To maintain accessibility, we need more people choosing to become the accessibility owner for a product. So I challenge you, the accessibility advocate or coder looking for a home, to become the accessibility advocate for your product at !DayJob, or to pick an open source project you use and become its accessibility hero. If you have an open source tool or platform you use all the time, I bet you dollars to Boston cream donuts they either don’t have a contributor dedicated to accessibility, or their existing accessibility people want resources. And they likely would love an accessibility manager — especially if you volunteer to write the patches!
What you can contribute depends, obviously, on your skill set. But you don’t need to be a developer or a designer to be the Accessibility Hero at your organization.
You’re a coder, and you care about accessibility, but you don’t know much about it
Now is a great time to start! Start reading on the WebAIM mailing list and website until you have a handle on the interaction between theory and practice. Once you’re comfortable, work with the project leads to focus on what you and they agree is some nice, high-value, low-hanging fruit. You don’t have to start with the tougher stuff: visualizations, mapping, animated games. Pick something straightforward to begin with (adding alt text to the product logo?), and build up from there.
You’re an accessibility advocate, but don’t know much about coding
You will be a huge asset to a project by becoming the accessibility wrangler. Create and garden the accessibility section of the bug tracker. Find devs to fix those bugs, and put them in touch with the best documentation or with potential testers. Write documentation explaining why and how. Create project best practices. Learn how to test for accessibility yourself (it’s harder to test for than it is to code for), and become a QA tester. Seek out users with disabilities and convince them to become programmers, QA testers, and active bug reporters. I can’t overstate the importance of having someone like that on your project team.
- Pick a project you care about! One you use all the time — an IRC client, an editor, a social media tool, a learning management system, an IDE — will make each of your fixes more rewarding for you. I promise you there is not a single software product out there (including the platform where I co-lead the Accessibility Team, Dreamwidth) with perfect accessibility.
- Don’t just appear out of nowhere and start pestering existing contributors about accessibility. You want buy-in from the existing team, so approach the contributors in their forum of choice and tell them you’re ready to start committing accessibility patches or testing existing code, and you want to know if they have anywhere in particular they’d like you to begin. If this is for a project at !DayJob, speak with project managers and team leads and make sure you have their support.
- Seek out users with disabilities who have expressed the kind of interest that makes it seem possible they might be willing to be testers, coders, or bug reporters. They might already be there and organized, in which case, hooray! Your job is that much easier.
- Make a forum where developers, designers, and users can talk exclusively about accessibility issues. The type of forum doesn’t matter. IRC, mailing list, twitter, or a blogging platform are all fine, as long as your users with disabilities can access the platform.
- If users with disabilities report something that sounds like it’s not a bug to you, listen anyway. They might be mistaken, but trust that you don’t have their experience. Using computers with adaptive tech can be exhausting, and the tiniest roadblocks become major.
- Remember to code for keypress, keyup, keydown, and focus events when you are handling hover, mouse-click, and similar events.
- If you create fake links or other HTML elements using spans or divs plus JS, make them accessible by adding a tabindex attribute, a role attribute, and coding for keyboard access as in point the first. But default to native HTML wherever possible.
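To make those last two points concrete, here’s a minimal sketch in plain JavaScript (function and handler names are invented) of wiring up a div-based fake link so it behaves like a real one:

```javascript
// Minimal sketch: making a span/div "fake link" accessible.
// (Function and handler names are invented for illustration;
// `el` is a DOM element in the browser.)
function makeAccessibleLink(el, activate) {
  el.setAttribute('role', 'link');   // announce it as a link to assistive tech
  el.setAttribute('tabindex', '0');  // make it reachable with the Tab key
  el.onclick = activate;
  el.onkeydown = function (event) {
    // Activate on Enter or Space, like a native link or button.
    if (event.key === 'Enter' || event.key === ' ') {
      event.preventDefault(); // stop Space from scrolling the page
      activate(event);
    }
  };
}
```

Of course, a native `a` or `button` element gives you all of this for free, which is exactly why defaulting to native HTML is the better first move.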
 Try WebAIM and their great mailing list, as a good starting point. If you spend some time on the #accessibility and #a11y hashtags on twitter you’ll see some great conversations, as well. Learn what WAI-ARIA can and can’t do for you. Start there and branch out to the resources most useful to you. Understanding the Web Content Accessibility Guidelines (WCAG) and other standards is important for experts, but if you begin by reading standards, you’ll get overwhelmed, bogged down in details, and be distracted from the balance between following standards and creating usable, accessible sites.[Back]
 Well, I can overstate it. Having an accessibility wrangler won’t help you defeat giant lizards attacking the Pacific Coast or defend your Death Star from those pesky rebels. But it will make coding for accessibility a pleasure and delight. [Back]
Last week in Portland, I attended Monitorama, a conference on open source monitoring. Speakers and attendees demoed fancy new software, shared personal experiences, and – in true cross-disciplinary learning fashion – showed off lots of cool math! Keep your eye on this Vimeo page for forthcoming videos from the conference. In the meantime, let’s run through some of the conference highlights.
The current state of monitoring
Unfortunately, the current state of monitoring systems isn’t much better than it was 10 years ago. We still monitor systems using the same software, running on a monolithic server, with the same set of probes that have little to no intelligence built into them.
We still dread being on call, and all of us suffer from the same on-call fatigue — missing sleep, missing time with friends and family, and having a Pavlovian fight-or-flight response to that familiar buzzing or ringing of the on-call phone.
We still tell new hires, “Look at these graphs and familiarize yourself with how things look.” The very same graphs that we stopped looking at after a few months on the job!
There’s hope for the future
It’s not all doom and gloom, mind you. We can use science! to predict failures and find anomalies in our systems. Talks from Toufic Boubez, Noah Kantrowitz, Dr. Neil J. Gunther, and Baron Schwartz focused on taking a more scientific approach to monitoring, but they also emphasized that there is no silver bullet. I can’t, for example, check out some magical project from GitHub, install it, and watch it crunch all my metrics into intelligent predictions and analysis.
However, there are steps we can take now to improve the current state of things:
- Build robust monitoring systems – Monitoring systems should be at least as robust as the things they monitor. Too often we run our monitoring system as a single point of failure. If a system failure is critical enough to wake us up at 2 am, then why don’t we architect monitoring systems that are themselves fault tolerant? Build in redundancy, regularly test the systems, and have multiple paths of communication in case of failure. Monitor the monitoring systems themselves for failure.
- Monitor if work is happening – If the site is up, do you care at 2 am that the load average spiked? Use that information for reporting and trend analysis in the waking hours, but don’t wake anyone up to tell them everything is fine.
- Avoid alert fatigue – We want the phone going off to mean something, so ditch or turn down everything but the truly critical states. Work with your peers to create a system in which you can temporarily hand off responsibility for discrete periods of time (bathroom, shower, putting kids to bed, or you had a rough night and just need a nap). Make this system as simple and reliable as possible.
- Escalate quickly and be persistent – If something really is critical, then our systems should notify us loudly and often. They should over-communicate – send a page, send an email, pipe something to a chat – so that other people can assist and gather data.
- Create hand-off reports – Communicate recent alerts to the next person or team on duty, so everyone knows the current state of things and what problems to expect. These reports can also help you track what problems are recurring week over week.
- Rely more heavily on an open chat stream – For communications during an outage, integrate alerts into chat to build a timeline of an outage. The chat can later serve as a log of what steps were taken, and provide you with a chronology of the outage. Use the chronology to review what happened and help decide how things could have gone better.
- Create playbooks for outage response – A “playbook” helps with shared knowledge and documentation across teams, and it also helps your future 2 am self when your brain won’t be working as well. You’ll need all the help you can get.
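The “escalate quickly and be persistent” idea can be as simple as a table mapping unacknowledged time to notification channels. Here’s a toy sketch (thresholds and channel names are invented):

```javascript
// Toy sketch of an escalation policy: the longer a critical alert goes
// unacknowledged, the more channels get notified.
// (Thresholds and channel names are invented for illustration.)
const ESCALATION_STEPS = [
  { afterMinutes: 0,  channels: ['chat'] },
  { afterMinutes: 5,  channels: ['chat', 'email'] },
  { afterMinutes: 15, channels: ['chat', 'email', 'page'] },
];

function channelsFor(minutesUnacknowledged) {
  // Walk the steps in order; the last threshold we've passed wins.
  let channels = [];
  for (const step of ESCALATION_STEPS) {
    if (minutesUnacknowledged >= step.afterMinutes) {
      channels = step.channels;
    }
  }
  return channels;
}
```

Keeping the policy in data like this makes it easy to review, tune, and test – which is exactly what you want for something that decides when to wake people up.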
One persistent theme throughout the conference was the call to concentrate on data analysis, instead of piling on more and more monitoring. This focus on analysis is a distinct shift from a past in which we wanted tools that would give us more data about our systems. Now we have the data (maybe too much), and we need tools to help us analyze, store, and process that data. Tools like Graphite, ElasticSearch, Kibana, Logstash, and Heka (shameless self-promotion) are the keys to making sense of all that our systems are trying to tell us.
Another theme of the conference was the growing adoption of the Lua and Go programming languages. Lua is a favorite for its speed and ease of embedding into existing applications. Go is favored for its concurrency and speed. Go is a compiled language, which is itself a trend, marking a shift away from interpreted languages. Go code compiles extremely fast, and in our heterogeneous environment, a single, compiled binary is a welcome change over the multitude of version and language dependencies we have to manage for other projects.
Some wicked cool software
There was a lot of interesting software presented at Monitorama. The most interesting to me, though, were these gems:
Flapjack – monitoring alert and routing system (http://flapjack.io/)
- Segregates responsibility for self-service monitoring
- Takes input from multiple monitoring sources, aggregates them into Flapjack, and then outputs them to multiple destinations
- Includes end-to-end self-testing and alerts if messages are not flowing through the system
That last bullet is really important to me. If you’re going to put all your critical alerting data into a system, it better have a way of telling you when it fails.
Dashing – easy dashboard generator (http://dashing.io/)
- Runs on a Raspberry Pi or Chromecast.
- Uses server-sent events
- Is not a third party you send your dashboard data to (i.e., you own your own data)
Wiff – packet processing pipeline (https://github.com/wayfair/wiff)
- Provides network analysis
- Is basically a real-time pcap to JSON converter
- Allows you to use all the cool existing JSON import tools for analysis
Ideas I would like to pursue here at Safari
I’d like to see us start thinking of monitoring as a service. Can we create an alert and log message pipeline so that others in the company can subscribe to certain portions and build tools? Can we make tools that allow monitoring and message passing to be self-service, so that operations isn’t a bottleneck for setting up monitoring and logging, and some of the responsibility can be shared with the appropriate people? I imagine a system where the data flows freely and really smart people (way smarter than I) make awesome internal tools.
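One way to picture that self-service pipeline is a topic bus that any team can subscribe to. This is a toy sketch, not a real design – all names are invented:

```javascript
// Toy sketch of a self-service alert/log pipeline: teams subscribe to
// the topics they care about, without operations in the loop.
// (All names are invented for illustration.)
class AlertBus {
  constructor() {
    this.subscribers = {};
  }
  subscribe(topic, handler) {
    if (!this.subscribers[topic]) this.subscribers[topic] = [];
    this.subscribers[topic].push(handler);
  }
  publish(topic, message) {
    // Deliver the message to every handler registered for this topic.
    const handlers = this.subscribers[topic] || [];
    for (const handler of handlers) handler(message);
  }
}
```

In practice this role is filled by tools like the ones above (Flapjack, or a message queue feeding Logstash/Heka); the point is the shape: producers publish, teams subscribe, and nobody has to file a ticket to start listening.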
|Effective Monitoring and Alerting describes a data-driven approach to optimal monitoring and alerting in distributed computer systems. It interprets monitoring as a continuous process aimed at extraction of meaning from system data. The resulting wisdom drives effective maintenance and fast recovery – the bread and butter of web operations.|
|In Advanced Mathematics for Applications, Andrea Prosperetti draws on many years’ research experience to produce a guide to a wide variety of methods, ranging from classical Fourier-type series through to the theory of distributions and basic functional analysis.|
|Lua offers a wide range of features that you can use to support and enhance your applications. With Beginning Lua Programming as your guide, you’ll gain a thorough understanding of all aspects of programming with this powerful language. The authors present the fundamentals of programming, explain standard Lua functions, and explain how to take advantage of free Lua community resources. Complete code samples are integrated throughout the chapters to clearly demonstrate how to apply the information so that you can quickly write your own programs.|
|With Programming in Go: Creating Applications for the 21st Century you’ll learn how today’s most exciting new programming language, Go, is designed from the ground up to help you easily leverage all the power of today’s multicore hardware. With this guide, pioneering Go programmer Mark Summerfield shows how to write code that takes full advantage of Go’s breakthrough features and idioms.|
We’ve had lots of requests for offline reading in Safari Flow, and I’m excited to share that we’ve started development work on the native apps* that will offer offline reading first.