The new soul of the insight economy: A Datapalooza dispatch
The first Datapalooza is now history, but it was history in the making. The San Francisco Datapalooza was, after all, simply the inaugural event in a series of conferences that will come to other cities beginning in 2016.
For an in-depth look at the hottest new data science tools and techniques, Datapalooza is the place to be. The conference may be over, but I’m still unpacking everything I learned during its sessions.
If you weren’t there on the downtown campus of Galvanize University, catch up on what happened by reading my Datapalooza dispatches. Examine a data engineering project presented by Silicon Valley Data Science, find out what app development projects are under way at the Spark Technology Center—several of which featured in Datapalooza sessions—and internalize the chief takeaways from the San Francisco Datapalooza while discovering what the near future holds for the conference.
Turn learning into application
In this dispatch, I’ll elaborate on the rationale for Datapalooza, describing its value to the new generation of data scientists who are powering the insight economy. And you’ll be hearing this straight from the horse’s mouth—I was a member of the team that conceived, shaped and orchestrated the program for which this first Datapalooza has served as a proof of concept.
Let’s start with what Datapalooza is not. Datapalooza is not a university program in data science that issues degrees or certificates such as those students pursue in a course of study at Galvanize University. Nor is it among the hackathons sponsored by IBM or other organizations, during which competing teams of data scientists spend a tense weekend hunkered down on projects in hopes of winning a cash prize and peer recognition. It isn’t a meetup of data scientists that offers the chance to listen to inspiring speakers for a few hours after work. And it certainly isn’t online courseware designed to help aspiring data scientists bootstrap their understanding of machine learning, Spark, Hadoop or other such subjects.
What, then, you might ask, is Datapalooza? Datapalooza is a community event at which attendees rapidly build data products, drawing on an immersive in-person curriculum that forms the backbone of the three-day agenda. Each day of Datapalooza culminates in a 90-minute build-and-share session in which participants show off what they’ve produced so far. Before the end of the third day, every Datapalooza participant can expect to have built a high-quality data application whose utility is comparable to that of the apps demoed in the breakout sessions.
Find your place in the insight economy
Real-time community engagement with the expectation of quick results can test the mettle of the best data scientists, but it can also spur them to new heights of creativity and productivity. Best of all, in-person sharing among peers can help data scientists pool their creativity to produce startling new syntheses. Datapalooza is not a hackathon: No contest will be won, nor any cash prizes awarded. It is not a course of study: Participants will not earn credits, certificates or diplomas. Nor is it a meetup: All data scientists in attendance—not just featured presenters—will demonstrate their knowledge.
At Datapalooza, data scientists build their apps for a single reason: to show themselves—and others—what they can achieve when they have the right tools, the right data, reliable guidance and accurate feedback. The people who attend Datapalooza are the soul of the insight economy. They love data—they live and breathe it—and they doggedly mine it and model it in search of fresh insights.
The San Francisco Datapalooza is over, but Datapalooza has just begun. Stay tuned for news about when Datapalooza will be coming to a city near you—it’s all part of our effort to engage the world’s brightest data scientists wherever and whenever is right for them.
While you wait, get involved in the data science community. To get started, engage with the Spark Technology Center—it’s the perfect entry into the insight economy, offering you the chance to contribute projects, designs and code to Apache Spark.
For an in-depth look at how Spark has become a powerful tool in the hands of a new breed of citizen data scientists, check out this informative IBM Analytics resource page.