标签 - tech

文章 feed - 评论 feed

2014-02-15

TrueVault Launches To Bring Easy HIPAA Compliance To Startups And Health Apps | TechCrunch

Comments: "TrueVault Launches To Bring Easy HIPAA Compliance To Startups And Health Apps | TechCrunch"

URL: http://techcrunch.com/2014/02/14/truevault-launches-to-bring-easy-hipaa-compliance-to-startups-and-health-apps/


It was the best of times, it was the worst of times: In an effort to jumpstart the U.S. economy amidst the runaway blight of the “Great Recession” and financial crisis beginning in 2008, Congress scrambled to enact and then distribute its unprecedented and controversial $787 billion economic stimulus package. Among other things, the Stimulus Bill acted as a vehicle for another landmark piece of legislation, the HITECH Act, which sought to lay the foundations for sweeping healthcare reform.

Not only did the HITECH Act aim to encourage the bloated healthcare industry to lower costs and adopt healthcare information technology and electronic health records, it brought key changes to HIPAA privacy and security provisions as well. In January, these changes were finalized, and they have important implications for all digital health companies, technology providers and app developers.

The rule changes (and the rules themselves) are complex, and they require startups and engineers to put in a lot of work to maintain compliance. In healthcare, where the need for efficiency-increasing, cost-reducing technology (and more engineers) is paramount, this is a problem. In a lot of cases, rather than take the time to become HIPAA-compliant, startups and developers are paring back the features and functionality of their applications. This reduces the overall value proposition of the product and strips it of an important part of the feedback loop.

Luckily, TrueVault has your back. Launching out of Y Combinator’s most recent batch of startups, TrueVault is on a mission to unburden startups of the time-consuming, progress-stalling process of HIPAA compliance so that they can get back to focusing on what’s really important: Fixing the healthcare experience.

Over the last two years, there’s been an explosion in mobile health apps. The problem, however, is that many of them are crap. Some of them are just clones, but many of them lack the kind of functionality that people want out of a mobile health app. The average consumer wants to access health information, not uncontextualized data, but the new changes to HIPAA require compliance from apps and technology that deliver health information.

TrueVault wants to solve this problem by offering a secure API to store health data and simplify the complexity of HIPAA compliance. The idea is to save startups hundreds of development hours by ensuring that they can avoid worrying about setting up and maintaining a HIPAA-compliant application stack. Instead, TrueVault handles all physical and technical safeguards required by HIPAA, while working like the majority of API services, says co-founder Trey Swann.

TrueVault targets startups, web and mobile apps and wearables, enabling them to store and search protected health information (PHI) in any file format through RESTful APIs. It will sign a “Business Associate Agreement” with customers, as HIPAA is wont to make everyone do, and protects them under a comprehensive privacy and data-breach insurance policy.
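To make the model concrete, here is a minimal sketch of what storing a PHI record through a REST API of this kind could look like in Ruby. The endpoint, vault name, and API key below are invented for illustration and are not TrueVault’s actual API; consult the real documentation before building against it.

```ruby
require "net/http"
require "uri"
require "json"

# Hypothetical endpoint and credentials -- for illustration only.
uri = URI("https://api.example-vault.test/v1/vaults/demo-vault/documents")

# A PHI record to store; any JSON-serializable document would do.
record = {
  patient_id: "12345",
  blood_pressure: "120/80",
  recorded_at: "2014-02-14T10:00:00Z"
}

request = Net::HTTP::Post.new(uri)
request.basic_auth("MY_API_KEY", "")   # API-key-as-username auth, common in REST services
request["Content-Type"] = "application/json"
request.body = JSON.generate(record)

# To actually send it (omitted here), you would run:
#   Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
```

The point of a service like this is that the request looks like any other REST call; the encryption, key management and audit logging happen on the provider’s side.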

Now of course, you may say: “But, Rip, there are plenty of HIPAA-compliant hosting providers. What about those?” Touché, my friend. Touché. Familiar names like AWS, FireHost and RackSpace all offer HIPAA-compliant hosting and will sign a BAA. So, you could move your applications and health data over to one of the big players.

Many startups are facing this “build vs. buy” decision right now. That’s why co-founder Trey Swann sees a big opportunity for TrueVault. The value proposition that TrueVault claims over HIPAA-compliant hosting providers, he says, is that those providers still require companies to spend months building a HIPAA-compliant app stack in that environment, which requires a laundry list of technical specifications.

The other benefit is cost. If a company wants to sign a BAA with AWS, it needs to use dedicated instances, and each instance-hour costs 10 percent more than the standard fee. At roughly $2 per instance-hour, the meter effectively starts at about $1,500/month to become HIPAA-compliant with AWS. FireHost, on the other hand, starts at $1,115/month, plus a $250 premium for each HIPAA-ready instance that’s added.

Instead, TrueVault is offering its service at a fairly competitive price point: $0.001/API call. Yes, that’s 100K calls for $100. Swann says that unlimited file and JSON storage are included in that price. Not bad for a service that offers automatic encryption of all stored data, APIs for searching that encrypted data, audit tracking, proactive monitoring, integrity hashes, and an uptime SLA.
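As a back-of-the-envelope check on those figures (assuming a 730-hour month; the per-unit prices are the ones quoted above):

```ruby
HOURS_PER_MONTH = 730  # ~ 24 * 365 / 12

# AWS dedicated instances at roughly $2 per instance-hour
aws_monthly = 2.00 * HOURS_PER_MONTH   # ~ $1,460, i.e. about $1,500/month

# TrueVault at $0.001 per API call
truevault_monthly = 100_000 * 0.001    # 100K calls comes to about $100
```

Whether that comparison holds for a given app depends, of course, on how many API calls it actually makes per month.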

The key, though, is search. In order to be compliant with HIPAA, apps have to encrypt their databases, which means an app can’t search its own data, and the functionality suffers as a result. TrueVault’s service protects your data and also allows you to query that protected data. Companies can get unlimited file and JSON storage, and search any JSON document and binary field, or have their apps call TrueVault’s Search API directly to quickly add a search interface to their apps.

Today, TrueVault has about 5 million documents stored on its platform and millions of API calls are being made to its APIs every week. The startup has already signed on nearly 200 companies, including image32, LifeVest Health, Weave and Rocky Mountain Health Plans and is growing fast.

For more, check out TrueVault at home here.

Original post on Hacker News

Using pry in production - Without shooting yourself in the foot - Bugsnag

Comments: "Using pry in production - Without shooting yourself in the foot - Bugsnag"

URL: https://bugsnag.com/blog/production-pry


Using pry in production
Without shooting yourself in the foot
Conrad Irwin

Bugsnag has been using pry as a replacement for ruby's irb console since before I joined (disclaimer: I'm one of the pry core team). It's better than irb for a number of reasons, but chief among them are that it syntax highlights input and output, and it crashes less often.

This is most useful in development, when you spend a considerable amount of time in the console, but it's also useful in production when you need to analyze (or even fix) production data. Here are some simple instructions to get you started.

1. Include pry in the Gemfile

Making pry work in production is easy: you just add it to the production part of the Gemfile. We also use pry-plus, but we restrict that to development because it's less useful in production and some of the plugins are less well tested than pry itself.

# Gemfile
gem 'pry-rails'
gem 'pry-plus', group: :development

Once you've changed the Gemfile, run bundle and then deploy. Pry will be available in production.

2. Change the prompt in production

Using the same console in development as in production had an unforeseen problem: the prompt looked the same wherever you were. This led to code being run in production that was intended for development.

No permanent damage was done, but we decided that we should make it really really obvious which environment you're running in. To this end, we added a rails initializer that sets the prompt to something scary in production.

# config/initializers/pry.rb
# Show red environment name in pry prompt for non-development environments
unless Rails.env.development?
  old_prompt = Pry.config.prompt
  env = Pry::Helpers::Text.red(Rails.env.upcase)
  Pry.config.prompt = [
    proc { |*a| "#{env} #{old_prompt.first.call(*a)}" },
    proc { |*a| "#{env} #{old_prompt.second.call(*a)}" },
  ]
end

Now it's obvious when you're in production :).

3. Wrap it in a script

Running pry in production is surprisingly fiddly, particularly if you're trying to do it to answer an urgent question. You need to remember which directory to run it from, to use bundle exec, and to pass production as an argument.

To make this easier to do we added a production-console command to our chef recipes. All it does is open pry with all the options set.

#!/bin/sh
# /usr/local/bin/production-console
cd /apps/bugsnag-website/current
bundle exec rails console production

4. Designate a 'pry' machine

This solved most of our problems, until one afternoon we noticed that one of our web servers was performing significantly worse than the others. I logged into the machine to have a look around, and I found that a large portion of its CPU and RAM was being used by a pry session someone had left in a screen.

Now we only ever use pry on our monitor machine. The monitor machine usually spends its time doing non-critical things, like running monit, graphite, and cron. Since some of those cron jobs require our Rails codebase, production-console already worked on that machine, so it was the obvious choice.

To ensure that I don't forget which machine to ssh into, I have an alias defined in my ~/.zshrc that lets me run production-console on my laptop.

# ~/.zshrc
alias production-console='ssh -t monitor zsh -l -c production-console'

I hope that makes your console better in production. If you have other tips on improving pry in production, let us know @bugsnag.

Create Your Bugsnag Account

Bugsnag captures exceptions in real-time from your web, mobile and desktop applications, helping you to understand and resolve them as fast as possible.

You should create a free Bugsnag account today.

‹ return to posts

Original post on Hacker News

Internet troll personality study: Machiavellianism, narcissism, psychopathy, sadism.

Comments: "Internet troll personality study: Machiavellianism, narcissism, psychopathy, sadism."

URL: http://www.slate.com/articles/health_and_science/climate_desk/2014/02/internet_troll_personality_study_machiavellianism_narcissism_psychopathy.html


The Internet is sadists' playground. (Photo: Medioimages/Photodisc)

In the past few years, the science of Internet trollology has made some strides. Last year, for instance, we learned that by hurling insults and inciting discord in online comment sections, so-called Internet trolls (who are frequently anonymous) have a polarizing effect on audiences, leading to politicization, rather than deeper understanding of scientific topics.

That’s bad, but it’s nothing compared with what a new psychology paper has to say about the personalities of trolls themselves. The research, conducted by Erin Buckels of the University of Manitoba and two colleagues, sought to directly investigate whether people who engage in trolling are characterized by personality traits that fall in the so-called Dark Tetrad: Machiavellianism (willingness to manipulate and deceive others), narcissism (egotism and self-obsession), psychopathy (the lack of remorse and empathy), and sadism (pleasure in the suffering of others).

It is hard to overstate the results: The study found correlations, sometimes quite significant, between these traits and trolling behavior. What’s more, it also found a relationship between all Dark Tetrad traits (except for narcissism) and the overall time that an individual spent, per day, commenting on the Internet.

In the study, trolls were identified in a variety of ways. One was by simply asking survey participants what they “enjoyed doing most” when on online comment sites, offering five options: “debating issues that are important to you,” “chatting with others,” “making new friends,” “trolling others,” and “other.” Here’s how different responses about these Internet commenting preferences matched up with responses to questions designed to identify Dark Tetrad traits:

E.E. Buckels et al, "Trolls just want to have fun," Personality and Individual Differences, 2014.

To be sure, only 5.6 percent of survey respondents actually specified that they enjoyed “trolling.” By contrast, 41.3 percent of Internet users were “non-commenters,” meaning they didn’t like engaging online at all. So trolls are, as has often been suspected, a minority of online commenters, and an even smaller minority of overall Internet users.

The researchers conducted multiple studies, using samples from Amazon’s Mechanical Turk but also of college students, to try to understand why the act of trolling seems to attract this type of personality. They even constructed their own survey instrument, which they dubbed the Global Assessment of Internet Trolling, or GAIT, containing the following items:

I have sent people to shock websites for the lulz.

I like to troll people in forums or the comments section of websites.

I enjoy griefing other players in multiplayer games.

The more beautiful and pure a thing is, the more satisfying it is to corrupt.

Yes, some people actually say they agree with such statements. And again, doing so was correlated with sadism in its various forms, with psychopathy, and with Machiavellianism. Overall, the authors found that the relationship between sadism and trolling was the strongest, and that indeed, sadists appear to troll because they find it pleasurable. “Both trolls and sadists feel sadistic glee at the distress of others,” they wrote. “Sadists just want to have fun ... and the Internet is their playground!”

The study comes as websites, particularly at major media outlets, are increasingly weighing steps to rein in trollish behavior. Last year Popular Science did away with its comments sections completely, citing research on the deleterious effects of trolling, and YouTube also took measures to rein in trolling.

But study author Buckels actually isn’t sure that fix is a realistic one. “Because the behaviors are intrinsically motivating for sadists, comment moderators will likely have a difficult time curbing trolling with punishments (e.g., banning users),” she said by email. “Ultimately, the allure of trolling may be too strong for sadists, who presumably have limited opportunities to express their sadistic interests in a socially-desirable manner.”

Original post on Hacker News

Bitcore

Comments: " Bitcore "

Original post on Hacker News

Why Teachers Won't Be Replaced By Software

Comments: "Why Teachers Won't Be Replaced By Software"

URL: http://blog.trinket.io/teachers-wont-be-replaced/


Marc Andreessen believes that software is eating the world. It’s a very visceral image, and in one sense it’s absolutely true. Software is spreading into every industry, changing how established players must play and even what the rules of the game are. But while many in Silicon Valley and Educational Technology think that software will “eat” teachers, replacing many of them, at trinket we believe software’s role is to create openness, making teachers better and more connected. Far from there being fewer teachers in the future, we think openness will enable and encourage more people than ever to teach.

Godawful Teachers?

In the midst of a longer Twitter conversation I was having with him and others (which I will likely blog about separately), Andreessen made an interesting comment:

@hauspoor @bfuller181 I think a lot of people don’t understand how godawful many teachers are partic in poor areas. It’s a big problem. — Marc Andreessen (@pmarca) February 3, 2014

My suggestion was that increasing openness into what teachers are doing and what the results were was the solution to bad teaching. Sunlight, disinfectant, etc:

@hauspoor @bfuller181 Also, ruthlessly firing a ton of godawful teachers and replacing with software. — Marc Andreessen (@pmarca) February 3, 2014

 

Andreessen agreed but thinks there needs to be some sort of culling of the worst teachers. He thinks of education as a government monopoly that has been too long shielded from adaptive pressures. So, logically, he thinks that it’s a natural thing for Software to eat.

Software as Archetype instead of Omnivore

But, backing up, it seems that Andreessen’s assertion that software could replace the worst performing teachers isn’t the only possibility we should consider. Another possibility is suggested by the trajectory of the profession of programming itself. In this view, software won’t replace teaching so much as model its future as an occupation.

There was a time when programmers were regarded as mostly “godawful”, insulated from competence by structure and size. Those of us who have had to endure the dictates and systematic negligence of large IT departments can see where the term ‘Godawful’ might apply.

Yet software has gotten better in the past two decades. Why? How? Can we replicate this success for teaching?

Openness and Teaching’s Future

We can anticipate what’s happening with teaching by looking at how the software industry matured: it became friendler, more open, and more accessible. It did this despite more junior, inexperienced programmers flooding the job market. And, importantly, without someone having to fire the “bottom 10%” of programmers. By connecting people (rather than separating them), transparency gave a better account of who was ‘good’, helped to improve the skills of those who weren’t and has led to the craft of coding to flourish. The craft of teaching is beginning to undergo the same overhaul. In 140 characters, that is:

@pmarca I think this approach is inconsistent. We didn't have to fire bad coders to get good software. We needed openness into what they do — Elliott Hauser (@hauspoor) February 3, 2014

Coursera, Udacity and other massive platforms are delivering content to students but they’re also opening up these instructors’ methods and content to other instructors, for critique, reuse, and inspiration. The Open Courseware Initiative, spearheaded by MIT’s forward-thinking leadership, has made a default of openness a reality for a growing number of universities. And, all along, the humble course page has remained the most prevalent form of open teaching.

Professors have been sharing course materials online for almost as long as the Web has been around, often via hand-written HTML. Inspired and encouraged by this, we’re building the easiest way to make an interactive course page to support classroom teaching. While the purpose of most online course materials is to let students access them, we’re also building direct support for instructor-to-instructor interaction around materials. This is harder than it looks, but we think we’ve cracked the code. More in a future blog post. For now, let’s sum this up.

Why Andreessen and other VCs are Wrong about Software Eating Teaching

I’ll admit it’s somewhat unfair to write a blog post around a few tweets, inferring deeper thoughts on complex issues from 140 character snippets. So I may very well have misrepresented Andreesen’s thoughts, though I’m confident that I’m pretty close to the mark. Like I said before, though, I don’t think that Andreesen or most venture capitalists have malevolent intentions. Far from that: they’re seeking business opportunities that do real good for the world. In that way we’re on the same mission.

But I think they’ve made an error in logic when they assume that teachers will be replaced by technology. Encouraged by software’s staggering proliferation into every corner of the modern economy, they’ve been blinded to the parallels between the professions of programming and of teaching. If software ‘eats’ teaching, it will look like how software has ‘eaten’ itself: more tools to make humans more productive, effective, and connected. We will not see the rise and triumph of Teaching Machines that replace teachers any more than we’ve seen Coding Machines make coders obsolete. Rather, the need for teachers will increase apace of human innovation more broadly, and the most innovative companies in the ed tech space will augment, connect, and amplify these professionals.

In our industry, companies like Bloc, General Assembly, and DevBootCamp understand that the human element is central to teaching and are, similarly, building technology that augments good teachers rather than seeking to replace them. That’s also the approach we’re taking here at trinket.

We don’t know what the future of education looks like but, if we’re reading these trends right they point to more openness, more teachers, and software firmly ensconced as a tool for open teaching.

Thanks to Dave Paola from Bloc.io and Brian, Ben, Julia, and Pardees from the trinket team for reviewing earlier versions of this post. And, of course, to Marc Andreessen for helping spark the discussion on Twitter.
 

Original post on Hacker News

Throwing in the towel on becomming a programmer

Comments: "Throwing in the towel on becomming a programmer"

URL: http://waterstreetgm.org/throwing-in-the-towel-on-becomming-a-programmer/


I think I’m ready to hang up my programmer skates. In fact, it seems more likely that I never had skates to begin with. In the past 5-10 years, I’ve attempted to learn to code in virtually all of the major web languages and environments, using all of the latest tools and classes and tutorials—and I’ve failed miserably every single time.

I just stumbled across Dawn Casey’s omg! I’m a n00b and too afraid to start. It is unbelievably good. Go read it, I’ll wait. Some parts are sad, some parts are laugh-out-loud funny, and I’m sure there are some parts in the middle there that are encouraging to beginners, but I can see right through the whole thing.  The whole ebb and flow from “holy man, I’m completely lost” to “humm…I think I’m starting to finally get this!” is something I know very, very well. I’ve felt this addictive, though ultimately disappointing feeling many times.

We’ll have some fun reliving the agony in a moment, but first a word of background for the folks at home: although I can’t program I can definitely code. The distinction needs to be made there. I’ve been coding for years and have actually gotten pretty good at it. WordPress is my main tool of choice and I’ve gotten quite handy with it — I maintain my own starter theme (a fork / amalgamation of several projects),  I write all of my projects from scratch, I write (modify) all kinds of custom functions to make different parts work, and I generally know the ins and outs of building reasonably complex sites with WordPress. On top of that, I’m extremely comfortable in the command line and I use Git for almost everything I work on. All that is to say that  A) I’m not a beginner, and B) I’m not non-technical. But, as we’ll now learn, I’m absolutely not a programmer.

A Sense of an Endpoint or, The Trouble with Programming

Over the years I have tried to learn all the big players: PHP, Javascript, Ruby, Python, Perl, Java and Objective-C. I have failed to learn all of these. It’s almost staggering to even write/realize this. Seven languages. Seven! And I completely and utterly failed at learning all of them. What’s the issue then? After all these years, I think I’ve finally come up with the answer: although the road to starting to learn all of these languages is manageable, they all have a brick wall at the end. Let’s look at the four I spent the most time with:

PHP

PHP is my “best” language, though that’s a dubious honour. It’s the one I’ve been playing with the longest, and the one I’m most comfortable in, given all my time spent with WordPress. PHP is (should be?) a great language to learn with because all of the environment stuff is taken care of for you—just download MAMP, stick some PHP tags into a document and off you go. This is the language I’ve definitely spent the most time with — I’ve done several courses, read several thick books, read literally zillions of tutorials. I’ve gone through lengthy tutorials where I create an object that has a PDO or something to access my fake eCommerce store in my fake database. Things have actually gone fairly well a few times with PHP, in that I’ve gotten fairly far along with the material, but it never lasts long. Pretty soon it’s midnight on a Tuesday and I’m trying to access a query string that was sent via $_POST and I get thinking, “You know? Life is waaaaaay too short for this”.

I’m sorry…..what?

So, while starting with PHP is great, going beyond the basics has felt like solving a Rubik’s Cube with my toes. In the end, every single time, I’ve decided that there’s no way I’d ever want to build anything with PHP that I couldn’t already do much faster/easier/better with WordPress. And hence, I’ve given up.

Javascript

This is the real fun one! I’ve spent almost as much time with Javascript as I have with PHP. I started with jQuery (which I can use reasonably capably) and eventually worked backwards into plain Javascript. It’s actually a lot of fun, at first. The thing with Javascript is you have control over when things happen. This control is given to you by funny things called “callbacks”. Essentially, you use a function to call (or “callback”) another function. Let’s say for some reason you wanted to hide every image on a page for 10 seconds on page load. All you need to do is create a timer function that counts for 10 seconds and then calls the image-loading function as a callback. See? Fun!
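The idea translates into Ruby too (the language used elsewhere on this page), with a block standing in for the JavaScript callback. This is only a sketch of the pattern, not anyone's real code, and the delay is shortened so it runs instantly:

```ruby
# Run a callback once a timer fires -- a Ruby sketch of the
# timer-plus-callback pattern described above.
def after_delay(seconds, &callback)
  sleep(seconds)
  callback.call
end

images_visible = false
after_delay(0.01) { images_visible = true }  # "show the images" when the timer fires
```

The block passed to `after_delay` isn't run until the timer finishes, which is the whole trick: you hand over code now to be executed later.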

The not fun part about Javascript is that brick wall I was talking about earlier. After spending a lot of time going through tutorials and courses and reading books and building little projects, I really felt like I was ready to take on the Javascript world. After you’ve got the basics down pat, the logical next step is browsing through the 500 TodoMVC apps and spending a month trying to decide which MVC framework will suit you best. After you decide, it will take you another month to try and figure out what an MVC framework even does. I still don’t really know.

So, when people tell you that Javascript is the wave of the future, this is what they’re talking about. If you’re feeling pretty good about all the JS you know, just have a look at this:

I’m sorry, ({ what })?

At first I thought, “Hummm, I’m really catching onto this Javascript stuff!” Everything’s an object, callbacks, hell, I even understood what the module pattern was and why it made sense to use it (to avoid this “spaghetti” business people talk about)! But turning all that knowledge into a working Backbone app? That felt like swimming in cement. The funny thing about Javascript is that I still can’t see a way for me to bridge those two worlds. I simply can’t see how I can take all these fundamentals that I know about Javascript now, and scaffold it up enough so that something like Backbone even makes sense to me. Despite hours and hours and hours of work, getting to that “next level” with Javascript feels literally impossible.

Python

I don’t have a lot to say about Python, except that I know the promise from xkcd is an empty one. I tried it a few times, and worked through the course at Codecademy, but it never felt very natural. I didn’t spend too long with it, but it never really clicked. On top of that, there’s a constant din of “Python’s not for you, it’s for them ({scientists, academics, hackers, statisticians, someone else})” out there if you look up stuff about Python. The language has always been really appealing for me, but for better or worse it’s never felt like something I should invest my time in. That, combined with the fact that I discovered Ruby meant the end for Python.

Ruby

If you’ve never touched a single language, this is the one for you. It really is beautiful, like so many say. It’s short, concise, fun, productive, and a lot of it really, truly reads like English. Of all the languages, Ruby was by far the most natural and fun. In the months I spent learning Ruby the hard way, I’d run home from work to get back to it. I really loved working on new things in Ruby — they all just made sense so quickly. By the end, I was feeling so happy and comfortable with Ruby that I even tackled some problems in Project Euler with it.

But as I look back at it now, I could have easily used PHP or Javascript for those same problems. I still like the syntax of Ruby better than all the others, but I wasn’t doing anything with Ruby that I couldn’t do with PHP or JS. I’d write a clunky function that did something with x and y and returned some value at the end. Doing it with Ruby was fun, but I wasn’t doing anything beyond playing with the primitives. Well, what’s beyond the primitives, you ask? In short: Rails. Just like all the others, Ruby has a brick wall as well. It’s called Rails. The only thing is, Rails isn’t just a brick wall, it’s a brick mountain.

A few years back, I spent about a month getting comfortable with Ruby. It was actually really nice, and as I’ve said, fun. Actual, legitimate fun. After I was happy with where I was with Ruby the language, I started in earnest with Michael Hartl’s famous Rails Tutorial. I probably lasted another 6 weeks after that, but I knew pretty early on that it really wasn’t going to happen for me. In the tutorial, Hartl introduces Ruby, Rails, Git, Heroku, Test Driven Development and just about everything else you can think of, right from the start. By the end of the first month, I had literally no idea what was going on. I’d rake something and then route something else, and then I’d try and migrate up to (or down from) somewhere and absolutely none of it made any sense. By quittin’ time (around week 6) I was 100% convinced that anything I’d end up building with Rails could be built in a fraction of the time in WordPress.

Sure, but what about Sinatra? I actually did a few projects with Sinatra as well, and really liked using it. It felt pretty fun too. Way less behind the scenes magic, and you could actually see what was going on. But what was the point? Rails is still the big endpoint, and playing around with Sinatra really doesn’t get you too far down the road to learning Rails.

A light at the beginning of the tunnel

The whole business of programming is extremely complex. There are so many moving parts to anything these days, and one could easily spend a year just learning the tools available for a given language/environment. Javascript is a perfect example of this—it’s completely and utterly obsessed with tools. Not that these tools aren’t useful, there’s just a million of them.

I’ll end with two excerpts. First, a quote from the omg I’m a noob piece that I mentioned at the top, that sums all of this up nicely for me:

Second, probably the true impetus for writing this post, is a few lines from David DeSandro‘s ImagesLoaded Javascript plugin. While en route down a winding rabbit hole the other day, I stumbled across this plugin and took a second to look through the code to see if any of it made sense to me. By about line 20, I was actually laughing out loud. I have no clue whatsoever what the code is doing, despite all of my courses and books and hours spent at the screen. Here’s the selection that made me spit coffee onto my keyboard:

Really though, just what is happening here?

It’s not that I’m done trying to learn any of this, or certainly not that I don’t find any of it enjoyable. I suspect that in years to come, I’ll go pick up a fresh copy of Ruby again and spend a month or two using it to figure out some Project Euler problems, but I think that my ambitions to become a programmer have finally extinguished. And I think I’m actually relieved by this.

Here’s to spending more time outside!

Original post on Hacker News

Instapainting Turns Your Photos Into Hand-Painted Oil Paintings On The Cheap | TechCrunch

Comments: "Instapainting Turns Your Photos Into Hand-Painted Oil Paintings On The Cheap | TechCrunch"

URL: http://techcrunch.com/2014/02/14/instapainting-turns-your-photos-into-hand-painted-oil-paintings-on-the-cheap/


Surprise! It’s Valentine’s Day, the stealthiest of all the holidays. Sneaks up on you, doesn’t it?

If you’re trying to get a gift today, you… might be a bit short on options. Will you go with the gas station teddy bear? The twice-crushed box of chocolates? A bouquet of acceptable-looking roses for $200?

If your nearly-forgotten flame would be content with the promise of a pretty cool gift in a few weeks, though, you might be set. Instapainting, a YC-backed company launching this morning, turns any photo into a hand-painted piece on canvas for under a hundred bucks.

If you’ve ever tried to have something like this done before, you probably know: this exists. A few companies have been doing the whole photo-into-art thing for years. Where Instapainting thinks they have them beat, however, is in pricing and speed.

Instapainting’s smallest option (a 12″x12″ canvas) starts the pricing at $53 (including shipping), with the largest option (29.33″x22″) going for $130. A quick search turns up a number of others in this space — OilPaintingExpress, OilPaintings.com, and myDavinci, to name a few. The next wallet-friendliest option I could find was OilPaintingExpress, where a 12″x12″ work starts at $119; most of them start the pricing at $200 to $300.

Instapainting’s website is also a bit more… modern, for lack of a better word. Setting up your order takes all of a few seconds; upload your photo, crop it to the region you like, pick a canvas size, and you’re set. Built on top of tools like FilePicker and Stripe, the whole ordering flow is slick and simple.

So how do they keep the prices down? A few ways:

  • Your original photo is printed onto canvas first, and this printed piece is used as the base/foundation of the hand painted piece. In other words: they’re painting on top of the photo. The artist still has to know how to properly mix colors and how to recreate lights/shadows/etc. in oil, but it’s a whole lot quicker than starting on blank canvas. Many a professional artist might balk at the idea — but unless your friends start scratching at the paint to see what’s underneath, they probably won’t be able to tell.
  • As you might’ve guessed, much of the work is done overseas. Instapainting’s founders source their painters (primarily in China) one-by-one, mostly through their myriad online profiles. After quietly starting to roll the service out around a month ago, Instapainting says they have just shy of 100 painters producing pieces.
  • They ship your art rolled in a tube, leaving it to the customer to frame it or stretch it onto canvas. The company tells me they’re working on a quick-assembling canvas frame that they can pack into the shipping tubes, but that’s still a few months out.

But what about shoddy work? Cheaper rarely means better, after all. To keep quality up, Instapainting puts two layers of protection into the mix: first, each painting is checked by a second set of eyes before it heads out to the customer. Second, they guarantee their work; if you don’t dig the oil-painted version they send you, they’ll remake it or give you a full refund.

Meanwhile, the company is also dabbling with the idea of being a marketplace for artists looking to have their work recreated by hand. Artists upload the digital version of their painting or photograph, and Instapainting recreates their work and shares the revenue. It’s not quite the same as buying an original piece by the original artist, of course — but when your main concern is how it looks hanging above your couch, it’s a nice alternative to buying a standard print.

We’re planning on putting the just-launched service through the proper paces, so be on the lookout for a full review in the coming weeks.

Original post on Hacker News

Tesla electric car catches fire in Toronto; company at a loss to explain - The Globe and Mail

Comments: "Tesla electric car catches fire in Toronto; company at a loss to explain"

URL: http://www.theglobeandmail.com/report-on-business/tesla-says-cause-of-toronto-garage-fire-not-yet-determined/article16898563/


Electric car maker Tesla Motors Inc. said it has not yet determined how a Model S sedan parked in its owner’s garage in Toronto caught fire earlier this month.

The fire comes a month after Tesla revamped the software and the wall adapters used to charge the batteries in its cars, following a November garage fire involving a Model S in Irvine, California. The Model S involved in the Toronto fire was not being charged, according to a media report.

More Related to this Story

Tesla said it has “definitively determined” that the Toronto fire did not originate in the battery, the charging system, the adapter or the electrical receptacle, noting that these components were untouched by the fire.

“In this particular case, we don’t yet know the precise cause,” Tesla said in a statement.

The company would not provide further details about the incident.

The Business Insider blog, which first reported the Toronto fire on Thursday, said the Model S caught fire after the owner came home from a drive. The four-month-old car was not plugged into an electric socket, Business Insider said, citing an anonymous source.

Model S cars, which sell for roughly $70,000 to $90,000, are powered by lithium-ion batteries that are charged by plugging the car into an electrical outlet.

Seven Tesla employees visited the Toronto owner of the vehicle that caught fire, and the company offered to take care of the damages caused by the fire, according to the Business Insider report.

The U.S. National Highway Traffic Safety Administration said it was aware of the fire in Canada.

“Since the incident occurred outside the territorial boundaries of the United States, the agency will be in contact with the manufacturer and others to gather the facts and take whatever action is warranted by the circumstances,” the NHTSA said in a statement.

Three road fires last fall in Model S sedans, including two in the United States and one in Mexico, caused Tesla’s stock to drop sharply in October, although the stock’s price since then has risen to just above $200.

The U.S. National Highway Traffic Safety Administration is investigating the two U.S. fires.

Original post on Hacker News

MakeGamesWith.Us

Comments: "MakeGamesWith.Us"

URL: https://www.makegameswith.us/build-your-valentine-a-game-in-your-browser/?



Original post on Hacker News

DARPA Open Catalog

Comments: ""

URL: http://www.darpa.mil/opencatalog/


Each entry below lists the project name, followed by the contributing organization, category, and license in parentheses, then the code location and a short description. All entries are from the 2014-07 catalog listing; entries marked "code pending" or "code expected" had no public repository at the time.

  • Network Query by Example (Aptima Inc.; Analytics; ALv2)
    https://github.com/Aptima/pattern-matching.git
    Hadoop MapReduce-over-Hive based implementation of network query by example utilizing attributed network pattern matching.

  • SMILE-WIDE: A scalable Bayesian network library (Boeing/Pitt; Analytics; ALv2)
    https://github.com/SmileWide/main.git
    SMILE-WIDE is a scalable Bayesian network library. Initially, it is a version of the SMILE library, as in "SMILE With Integrated Distributed Execution." The general approach has been to provide an API similar to the existing API SMILE developers use to build "local," single-threaded applications, while providing "vectorized" operations that hide a Hadoop-distributed implementation. Apart from a few idioms like generic Hadoop command-line argument parsing, these appear to the developer as if they were executed locally.

  • Support Distribution Machines (Carnegie Mellon University; Analytics; BSD)
    https://github.com/dougalsutherland/py-sdm.git
    Python implementation of the nonparametric divergence estimators described by Barnabas Poczos, Liang Xiong, and Jeff Schneider (2011), "Nonparametric divergence estimation with applications to machine learning on distributions," Uncertainty in Artificial Intelligence (http://autonlab.org/autonweb/20287.html), and of their use in support vector machines, as described by Dougal J. Sutherland, Liang Xiong, Barnabas Poczos, and Jeff Schneider (2012), "Kernels on Sample Sets via Nonparametric Divergence Estimates" (http://arxiv.org/abs/1202.0302).

  • Blaze (Continuum Analytics; Infrastructure; BSD)
    https://github.com/ContinuumIO/blaze.git
    Blaze is the next generation of NumPy. It is designed as a foundational set of abstractions on which to build out-of-core and distributed algorithms over a wide variety of data sources, and to extend the structure of NumPy itself. Blaze allows easy composition of low-level computation kernels (C, Fortran, Numba) to form complex data transformations on large datasets. In Blaze, computations are described in a high-level language (Python) but executed on a low-level runtime (outside of Python), enabling the easy mapping of high-level expertise to data without sacrificing low-level performance. Blaze aims to bring Python and NumPy into the massively multicore arena, allowing it to leverage many CPU and GPU cores across computers, virtual machines and cloud services.

  • Numba (Continuum Analytics; Infrastructure; BSD)
    https://github.com/numba/numba.git
    Numba is an open-source NumPy-aware optimizing compiler for Python sponsored by Continuum Analytics, Inc. It uses the LLVM compiler infrastructure to compile Python syntax to machine code. It is aware of NumPy arrays as typed memory regions and so can speed up code using NumPy arrays; other, less well-typed code is translated to Python C-API calls, effectively removing the "interpreter" but not the dynamic indirection. Numba is not a tracing just-in-time (JIT) compiler: it compiles your code before it runs, using either run-time type information or type information you provide in the decorator. In short, Numba is a mechanism for producing machine code from Python syntax and typed data structures such as those that exist in NumPy.

  • Bokeh (Continuum Analytics; Visualization; BSD)
    https://github.com/ContinuumIO/bokeh.git
    Bokeh (pronounced bo-Kay or bo-Kuh) is a Python interactive visualization library for large datasets that natively uses the latest web technologies. Its goal is to provide elegant, concise construction of novel graphics in the style of Protovis/D3, while delivering high-performance interactivity over large data to thin clients.

  • Abstract Rendering (Continuum Analytics and Indiana University; Visualization; BSD)
    https://github.com/JosephCottam/AbstractRendering.git
    Information visualization rests on the idea that a meaningful relationship can be drawn between pixels and data. This is most often mediated by geometric entities (such as circles, squares and text) but always involves pixels eventually. In most systems, the pixels are tucked away under levels of abstraction in the rendering system. Abstract Rendering takes the opposite approach: expose the pixels and gain powerful pixel-level control. This pixel-level power is a complement to many existing visualization techniques. It is an elaboration on rendering, not an analytic or projection step, so it can be used as an epilogue to many existing techniques. In standard rendering, geometric objects are projected to an image and represented on that image's discrete pixels. The source space is an abstract canvas that contains logically continuous geometric primitives, and the target space is an image that contains discrete colors. Abstract Rendering fits between these two states. It introduces a discretization of the data at the pixel level, but not necessarily all the way to colors. This enables many pixel-level concerns to be efficiently and concisely captured.

  • CDX (Continuum Analytics; Visualization; BSD)
    https://github.com/ContinuumIO/cdx.git
    Software to visualize the structure of large or complex datasets and produce guides that help users or algorithms gauge the quality of various kinds of graphs and plots.

  • Stencil (Continuum Analytics and Indiana University; Visualization; BSD)
    https://github.com/JosephCottam/Stencil.git
    Stencil is a grammar-based approach to higher-level visualization specification.

  • Vowpal Wabbit (Data Tactics Corporation; Analytics; BSD)
    https://github.com/JohnLangford/vowpal_wabbit.git
    The Vowpal Wabbit (VW) project is a fast out-of-core learning system sponsored by Microsoft Research and (previously) Yahoo! Research. Support is available through the mailing list. There are two ways to have a fast learning algorithm: (a) start with a slow algorithm and speed it up, or (b) build an intrinsically fast learning algorithm. This project is about approach (b), and it has reached a state where it may be useful to others as a platform for research and experimentation. Several optimization algorithms are available, with the baseline being sparse gradient descent (GD) on a loss function (several are available). The code should be easily usable. Its only external dependence is on the boost library, which is often installed by default.

  • Circuit (Data Tactics Corporation; Infrastructure; ALv2)
    https://code.google.com/p/gocircuit/source/checkout
    Go Circuit reduces the human development and sustenance costs of complex massively-scaled systems nearly to the level of their single-process counterparts. It is a combination of proven ideas from the Erlang ecosystem of distributed embedded devices and Go's ecosystem of Internet application development. Go Circuit extends the reach of Go's linguistic environment to multi-host/multi-process applications.

  • libNMF: a high-performance library for nonnegative matrix factorization and hierarchical clustering (Georgia Tech / GTRI; Analytics; ALv2; code pending)
    LibNMF is a high-performance, parallel library for nonnegative matrix factorization on both dense and sparse matrices, written in C++. Implementations of several different NMF algorithms are provided, including multiplicative updating, hierarchical alternating least squares, nonnegative least squares with block principal pivoting, and a new rank-2 algorithm. The library also provides an implementation of hierarchical clustering based on the rank-2 NMF algorithm.

  • SKYLARK: Randomized Numerical Linear Algebra and ML (IBM Research; Analytics; ALv2; code expected 2014-05-15)
    SKYLARK implements Numerical Linear Algebra (NLA) kernels based on sketching for distributed computing platforms. Sketching reduces dimensionality through randomization, and includes Johnson-Lindenstrauss random projection (JL); a faster version of JL based on fast transform techniques; sparse techniques that can be applied in time proportional to the number of nonzero matrix entries; and methods for approximating kernel functions and Gram matrices arising in nonlinear statistical modeling problems. We have a library of such sketching techniques, built using MPI in C++ and callable from Python, and are applying the library to regression, low-rank approximation, and kernel-based machine learning tasks, among other problems.

  • Immersive Body-Based Interactions (Institute for Creative Technologies / USC; Visualization; ALv2)
    http://code.google.com/p/svnmimir/source/checkout
    Provides innovative interaction techniques to address human-computer interaction challenges posed by Big Data. Examples include:
      - Wiggle Interaction Technique: user-induced motion to speed visual search.
      - Immersive tablet-based viewers: low-cost 3D virtual-reality fly-throughs of data sets.
      - Multi-touch interfaces: browsing/querying multi-attribute and geospatial data, hosted by SOLR.
      - Tablet-based visualization controller: eye-free rapid interaction with visualizations.

  • igraph (Johns Hopkins University; Analytics; GPLv2)
    https://github.com/igraph/xdata-igraph.git
    igraph provides fast generation of large graphs, fast approximate computation of local graph invariants, and fast parallelizable graph embedding, plus an API and web service for batch processing graphs across formats.

  • Vega (Trifacta, with a Stanford / University of Washington / Kitware, Inc. team; Visualization; BSD)
    https://github.com/trifacta/vega.git
    Vega is a visualization grammar, a declarative format for creating and saving visualization designs. With Vega you can describe data visualizations in a JSON format and generate interactive views using either HTML5 Canvas or SVG.

  • Tangelo (Kitware, Inc.; Visualization; ALv2)
    https://github.com/Kitware/tangelo.git
    Tangelo provides a flexible HTML5 web server architecture that cleanly separates your web applications (pure JavaScript, HTML, and CSS) and web services (pure Python). This software is bundled with some great tools to get you started.

  • LineUp (Harvard and Kitware, Inc.; Visualization; BSD)
    https://github.com/Caleydo/org.caleydo.vis.lineup.demos.git
    LineUp is a novel and scalable visualization technique that uses bar charts. This interactive technique supports the ranking of items based on multiple heterogeneous attributes with different scales and semantics. It enables users to interactively combine attributes and flexibly refine parameters to explore the effect of changes in the attribute combination. This process can be employed to derive actionable insights as to which attributes of an item need to be modified in order for its rank to change. Additionally, through integration of slope graphs, LineUp can be used to compare multiple alternative rankings on the same set of items, for example over time or across different attribute combinations. We evaluate the effectiveness of the proposed multi-attribute visualization technique in a qualitative study, which shows that users are able to successfully solve complex ranking tasks in a short period of time.

  • LineUp Web (Harvard and Kitware, Inc.; Visualization; BSD; code expected 2014-06)
    LineUpWeb is the web version of the novel and scalable visualization technique. This interactive technique supports the ranking of items based on multiple heterogeneous attributes with different scales and semantics, and enables users to interactively combine attributes and flexibly refine parameters to explore the effect of changes in the attribute combination.

  • Lyra (Stanford, University of Washington, and Kitware, Inc.; Visualization; BSD; code expected 2014-02)
    Lyra is an interactive environment that makes custom visualization design accessible to a broader audience. With Lyra, designers map data to the properties of graphical marks to author expressive visualization designs without writing code. Marks can be moved, rotated and resized using handles; relatively positioned using connectors; and parameterized by data fields using property drop zones. Lyra also provides a data pipeline interface for iterative, visual specification of data transformations and layout algorithms. Visualizations created with Lyra are represented as specifications in Vega, a declarative visualization grammar that enables sharing and reuse.

  • stat_agg (Phronesis; Analytics; ALv2)
    https://github.com/kaneplusplus/stat_agg.git
    stat_agg is a Python package that provides statistical aggregators that maximize ensemble prediction accuracy by weighting individual learners in an optimal way. When used with the laputa package, learners may be distributed across a cluster of machines. The package also provides fault tolerance when one or more learners become unavailable.

  • flexmem (Phronesis; Infrastructure; ALv2)
    https://github.com/kaneplusplus/flexmem.git
    Flexmem is a general, transparent tool for out-of-core (OOC) computing in the R programming environment. It is launched as a command-line utility, taking an application as an argument. All memory allocations larger than a specified threshold are memory-mapped to a binary file; when data are not needed, they are stored on disk. It is both process- and thread-safe.

  • laputa (Phronesis; Infrastructure; ALv2)
    https://github.com/kaneplusplus/laputa.git
    Laputa is a Python package that provides an elastic, parallel computing foundation for the stat_agg (statistical aggregates) package.

  • bigmemory (Phronesis; Infrastructure; ALv2)
    http://cran.r-project.org/web/packages/bigmemory/index.html
    Bigmemory is an R package to create, store, access, and manipulate massive matrices. Matrices are allocated to shared memory and may use memory-mapped files. The packages biganalytics, bigtabulate, synchronicity, and bigalgebra provide advanced functionality.

  • bigalgebra (Phronesis; Infrastructure; ALv2)
    https://r-forge.r-project.org/scm/viewvc.php/?root=bigmemory
    Bigalgebra is an R package that provides arithmetic functions for R matrix and big.matrix objects.

  • OODT (MDA Information Systems, Inc., Jet Propulsion Laboratory, and USC/Information Sciences Institute; Infrastructure; ALv2)
    https://svn.apache.org/repos/asf/oodt/
    Apache OODT enables transparent access to distributed resources, data discovery and query optimization, and distributed processing and virtual archives. OODT provides a software architecture that enables models for information representation, solutions to knowledge-capture problems, and unification of technology, data, and metadata.

  • Wings (MDA Information Systems, Inc., Jet Propulsion Laboratory, and USC/Information Sciences Institute; Infrastructure; ALv2)
    https://github.com/varunratnakar/wings.git
    WINGS provides a semantic workflow system that assists scientists with the design of computational experiments. A unique feature of WINGS is that its workflow representations incorporate semantic constraints about datasets and workflow components, and are used to create and validate workflows and to generate metadata for new data products. WINGS submits workflows to execution frameworks such as Pegasus and OODT to run workflows at large scale on distributed resources.

  • Query By Example (Graph QuBE) (MIT-LL; Analytics; ALv2; code expected 2014-02-15)
    Query-by-Example (Graph QuBE) on dynamic transaction graphs.

  • Julia (MIT-LL; Analytics; MIT, GPL, LGPL, BSD)
    https://github.com/JuliaLang/julia.git
    Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. It provides a sophisticated compiler, distributed parallel execution, numerical accuracy, and an extensive mathematical function library.

  • Topic (MIT-LL; Analytics; ALv2; code pending)
    Probabilistic Latent Semantic Analysis (pLSA) topic modeling.

  • SciDB (MIT-LL; Infrastructure; GPLv3)
    https://github.com/wujiang/SciDB-mirror.git
    Scientific database for large-scale numerical data.

  • Information Extractor (MIT-LL; Analytics; ALv2; code pending)
    Trainable named entity extractor (NER) and relation extractor.

  • Ozone Widget Framework (Next Century Corporation; Visualization; ALv2)
    https://github.com/ozoneplatform/owf.git
    Ozone Widget Framework provides a customizable open-source web application that assembles the tools you need to accomplish any task and enables those tools to communicate with each other. It is a technology-agnostic composition framework for data and visualizations in a common browser-based display and interaction environment that lowers the barrier to entry for the development of big-data visualizations and enables efficient exploration of large data sets.

  • Neon Visualization Environment (Next Century Corporation; Visualization; ALv2)
    https://github.com/NextCenturyCorporation/neon.git
    Neon is a framework that gives visualizations a datastore-agnostic way to query data and perform simple operations on that data, such as filtering, aggregation, and transforms. It is divided into two parts: neon-server provides a set of RESTful web services to select a datastore and perform queries and other operations on the data, while neon-client is a JavaScript API that provides a way to easily integrate neon-server capabilities into a visualization. It also aids in "widgetizing" a visualization, allowing it to be integrated into a common OWF-based ecosystem.

  • ApertureJS (Oculus Info Inc.; Visualization; MIT)
    https://github.com/oculusinfo/aperturejs.git
    ApertureJS is an open, adaptable and extensible JavaScript visualization framework with supporting REST services, designed to produce visualizations for analysts and decision makers in any common web browser. Aperture utilizes a novel layer-based approach to visualization assembly and a data-mapping API that simplifies the adaptable transformation of data and analytic results into visual forms and properties. Aperture vizlets can be easily embedded with full interoperability in frameworks such as the Ozone Widget Framework (OWF).

  • Influent (Oculus Info Inc.; Visualization; MIT)
    https://github.com/oculusinfo/influent.git
    Influent is an HTML5 tool for visually and interactively following transaction flow, rapidly revealing actors and behaviors of potential concern that might otherwise go unnoticed. Summary visualization of transactional patterns and actor characteristics, interactive link expansion, and dynamic entity clustering enable Influent to operate effectively at scale with big data sources in any modern web browser. Influent has been used to explore data sets with millions of entities and hundreds of millions of transactions.

  • Aperture Tile-Based Visual Analytics (Oculus Info Inc.; Visualization; MIT)
    https://github.com/oculusinfo/aperture-tiles.git
    New tools for raw-data characterization of "big data" are required to suggest initial hypotheses for testing. The widespread use and adoption of web-based maps has provided a familiar set of interactions for exploring abstract large data spaces. Building on these techniques, we developed tile-based visual analytics that provide browser-based interactive visualization of billions of data points.

  • Oculus Ensemble Clustering (Oculus Info Inc.; Analytics; MIT)
    https://github.com/oculusinfo/ensemble-clustering.git
    Oculus Ensemble Clustering is a flexible multi-threaded clustering library for rapidly constructing tailored clustering solutions that leverage the different semantic aspects of heterogeneous data. The library can be used on a single machine using multi-threading or in distributed computing using Spark.

  • Content and Context-based Graph Analysis: PINT, Patterns in Near-Real Time (Raytheon BBN; Analytics; ALv2)
    https://github.com/plamenbbn/XDATA.git
    Patterns in Near-Real Time takes any corpus as input and quantifies the strength of the query match to a SME-based process model, represents the process model as a directed acyclic graph (DAG), and then searches and scores potential matches.

  • Content and Context-based Graph Analysis: NILS, Network Inference of Link Strength (Raytheon BBN; Analytics; ALv2)
    https://github.com/plamenbbn/XDATA.git
    Network Inference of Link Strength takes any text corpus as input and quantifies the strength of connections between any pair of entities. Link strength probabilities are computed via shortest path.

  • GPU-based GraphLab-style Gather-Apply-Scatter (GAS) platform for quickly implementing and running graph algorithms (Royal Caliber; Analytics; ALv2)
    https://github.com/RoyalCaliber/vertexAPI2.git
    Allows users to express graph algorithms as a series of Gather-Apply-Scatter (GAS) steps, similar to GraphLab, and runs these vertex programs using one or more GPUs, demonstrating a large speedup over GraphLab.

  • BayesDB (Scientific Systems Company, Inc., MIT, and University of Louisville; Analytics; ALv2)
    https://github.com/mit-probabilistic-computing-project/BayesDB.git
    BayesDB is an open-source implementation of a predictive database table. It provides predictive extensions to SQL that enable users to query the implications of their data (predict missing entries, identify predictive relationships between columns, and examine synthetic populations) based on a Bayesian machine learning system in the backend.

  • Crosscat (Scientific Systems Company, Inc., MIT, and University of Louisville; Analytics; ALv2)
    https://github.com/mit-probabilistic-computing-project/crosscat.git
    CrossCat is a domain-general, Bayesian method for analyzing high-dimensional data tables. CrossCat estimates the full joint distribution over the variables in the table via approximate inference in a hierarchical, nonparametric Bayesian model, and provides efficient samplers for every conditional distribution. CrossCat combines strengths of nonparametric mixture modeling and Bayesian network structure learning: it can model any joint distribution given enough data by positing latent variables, but also discovers independencies between the observable variables.

  • Zephyr (Sotera Defense Solutions, Inc.; Infrastructure; ALv2)
    http://github.com/Sotera/zephyr
    Zephyr is a big-data, platform-agnostic ETL API, with Hadoop MapReduce, Storm, and other big-data bindings.

  • Page Rank (Sotera Defense Solutions, Inc.; Analytics; ALv2)
    https://github.com/Sotera/page-rank.git
    Sotera Page Rank is a Giraph/Hadoop implementation of a distributed version of the Page Rank algorithm.

  • Louvain Modularity (Sotera Defense Solutions, Inc.; Analytics; ALv2)
    https://github.com/Sotera/distributed-louvain-modularity.git
    Giraph/Hadoop implementation of a distributed version of the Louvain community detection algorithm.

  • Spark MicroPath (Sotera Defense Solutions, Inc.; Analytics; ALv2)
    https://github.com/Sotera/aggregate-micro-paths.git
    The Spark implementation of the micropath analytic.

  • ARIMA (Sotera Defense Solutions, Inc.; Analytics; ALv2)
    https://github.com/Sotera/rhipe-arima
    Hive and RHIPE implementation of an ARIMA analytic.

  • Leaf Compression (Sotera Defense Solutions, Inc.; Analytics; ALv2)
    https://github.com/Sotera/leaf-compression.git
    Recursive algorithm to remove nodes from a network whose degree centrality is 1.

  • Correlation Approximation (Sotera Defense Solutions, Inc.; Analytics; ALv2)
    https://github.com/Sotera/correlation-approximation
    Spark implementation of an algorithm to find highly correlated vectors using an approximation algorithm.

  • QCML (Quadratic Cone Modeling Language) (Stanford University - Boyd; Analytics; ALv2)
    https://github.com/cvxgrp/qcml.git
    Seamless transition from prototyping to code generation; enables the ease and expressiveness of convex optimization across scales with little change in code.

  • PDOS (Primal-dual operator splitting) (Stanford University - Boyd; Analytics; ALv2)
    https://github.com/cvxgrp/pdos.git
    Concise algorithm for solving convex problems; solves problems passed from QCML.

  • SCS (Self-dual Cone Solver) (Stanford University - Boyd; Analytics; ALv2)
    https://github.com/cvxgrp/scs.git
    Implementation of a solver for general cone programs, including linear, second-order, semidefinite and exponential cones, based on an operator splitting method applied to a self-dual homogeneous embedding. The method and software support both direct factorization, with factorization caching, and an indirect method that requires only the operator associated with the problem data and its adjoint. The implementation includes interfaces to CVX, CVXPY, and MATLAB, as well as test routines. This code is described in detail in an associated paper at http://www.stanford.edu/~boyd/papers/pdos.html (which also links to the code).

  • ECOS: An SOCP Solver for Embedded Systems (Stanford University - Boyd; Analytics; ALv2)
    https://github.com/ifa-ethz/ecos.git
    ECOS is a lightweight primal-dual homogeneous interior-point solver for SOCPs, for use in embedded systems as well as a base solver in large-scale distributed solvers. It is described in the paper at http://www.stanford.edu/~boyd/papers/ecos.html.

  • Proximal Operators (Stanford University - Boyd; Analytics; ALv2)
    https://github.com/cvxgrp/proximal.git
    This library contains sample implementations of various proximal operators in MATLAB. These implementations are intended to be pedagogical, not the most performant. The code is associated with the paper "Proximal Algorithms" by Neal Parikh and Stephen Boyd.

  • imMens (Stanford University - Hanrahan; Visualization; BSD)
    https://github.com/StanfordHCI/imMens.git
    imMens is a web-based system for interactive visualization of large databases. imMens uses binned aggregation to produce summary visualizations that avoid the shortcomings of standard sampling-based approaches. Through data decomposition methods (to limit data transfer) and GPU computation via WebGL (for parallel query processing), imMens enables real-time (50 fps) visual querying of billion+ element databases.

  • trelliscope (Stanford University - Hanrahan; Visualization; BSD)
    https://github.com/hafen/trelliscope.git
    Trellis Display, developed in the 90s, divides the data into subsets; a visualization method is applied to each subset and shown on one panel of a multi-panel trellis display. This framework is a very powerful mechanism for all data, large and small. Trelliscope, a layer that uses datadr, extends Trellis to large complex data. An interactive viewer is available for viewing subsets of very large displays, and the software can sample subsets of panels from rigorous sampling plans; sampling is often necessary because in most applications there are too many subsets to look at them all.
Publications RHIPE: R and Hadoop Integrated Programming Environment Infrastructure 2014-07 https://github.com/saptarshiguha/RHIPE.git stats In Divide and Recombine (D&R), big data are divided into subsets in one or more ways, forming divisions. Analytic methods, numeric-categorical methods of machine learning and statistics plus visualization methods, are applied to each of the subsets of a division. Then the subset outputs for each method are recombined. D&R methods of division and recombination seek to make the statistical accuracy of recombinations as large as possible, ideally close to that of the hypothetical direct, all-data application of the methods. The D&R computational environment starts with RHIPE, a merger of R and Hadoop. RHIPE allows an analyst to carry out D&R analysis of big data wholly from within R, and use any of the thousands of methods available in R. RHIPE communicates with Hadoop to carry out the big, parallel computations. ALv2 Stanford University - Hanrahan
Publications Riposte Analytics 2014-07 https://github.com/jtalbot/riposte.git stats Riposte is a fast interpreter and JIT for R. The Riposte VM has 2 cooperative subVMs for R scripting (like Java) and for R vector computation (like APL). Our scripting code has been 2-4x faster in Riposte than in R's recent bytecode interpreter. Vector-heavy code is 5-10x faster. Speeding up R can greatly increase the analyst's efficiency. BSD Stanford University - Olukotun
Publications Delite Infrastructure 2014-07 https://github.com/stanford-ppl/Delite.git stats Delite is a compiler framework and runtime for parallel embedded domain-specific languages (DSLs). BSD Stanford University - Olukotun
Publications SNAP Infrastructure 2014-07 https://github.com/snap-stanford/snap stats Stanford Network Analysis Platform (SNAP) is a general purpose network analysis and graph mining library. It is written in C++ and easily scales to massive networks with hundreds of millions of nodes, and billions of edges. It efficiently manipulates large graphs, calculates structural properties, generates regular and random graphs, and supports attributes on nodes and edges. BSD
SYSTAP, LLC bigdata Infrastructure 2014-07 https://bigdata.svn.sourceforge.net/svnroot/bigdata/ stats Bigdata enables massively parallel graph processing on GPUs and many core CPUs. The approach is based on the decomposition of a graph algorithm as a vertex program. The initial implementation supports an API based on the GraphLab 2.1 Gather Apply Scatter (GAS) API. Execution is available on GPUs, Intel Xeon Phi (aka MIC), and multi-core CPUs. GPLv2
SYSTAP, LLC mpgraph Analytics 2014-07 http://svn.code.sf.net/p/mpgraph/code/ stats Mpgraph enables massively parallel graph processing on GPUs and many core CPUs. The approach is based on the decomposition of a graph algorithm as a vertex program. The initial implementation supports an API based on the GraphLab 2.1 Gather Apply Scatter (GAS) API. Execution is available on GPUs, Intel Xeon Phi (aka MIC), and multi-core CPUs. ALv2
UC Davis Gunrock Analytics 2014-07 https://github.com/gunrock/gunrock.git stats Gunrock is a CUDA library for graph primitives that refactors, integrates, and generalizes best-of-class GPU implementations of breadth-first search, connected components, and betweenness centrality into a unified code base useful for future development of high-performance GPU graph primitives. ALv2 Draper Laboratory
Publications Analytic Activity Logger Infrastructure 2014-07 https://github.com/draperlab/xdatalogger.git stats Analytic Activity Logger is an API that creates a common message passing interface to allow heterogeneous software components to communicate with an activity logging engine. Recording a user's analytic activities enables estimation of operational context and workflow. Combined with psychophysiology sensing, analytic activity logging further enables estimation of the user's arousal, cognitive load, and engagement with the tool. ALv2 University of California, Berkeley
Publications BDAS Infrastructure 2014-07 N/A BDAS, the Berkeley Data Analytics Stack, is an open source software stack that integrates software components being built by the AMPLab to make sense of Big Data. ALv2, BSD University of California, Berkeley
Publications Spark Infrastructure 2014-07 https://github.com/mesos/spark.git stats Apache Spark is an open source cluster computing system that aims to make data analytics both fast to run and fast to write. To run programs faster, Spark offers a general execution model that can optimize arbitrary operator graphs, and supports in-memory computing, which lets it query data faster than disk-based engines like Hadoop. To make programming faster, Spark provides clean, concise APIs in Python, Scala and Java. You can also use Spark interactively from the Scala and Python shells to rapidly query big datasets. ALv2 University of California, Berkeley
Publications Shark Infrastructure 2014-07 https://github.com/amplab/shark.git stats Shark is a large-scale data warehouse system for Spark that is designed to be compatible with Apache Hive. It can execute Hive QL queries up to 100 times faster than Hive without any modification to the existing data or queries. Shark supports Hive's query language, metastore, serialization formats, and user-defined functions, providing seamless integration with existing Hive deployments and a familiar, more powerful option for new ones. ALv2 University of California, Berkeley
Publications BlinkDB Infrastructure 2014-07 https://github.com/sameeragarwal/blinkdb.git stats BlinkDB is a massively parallel, approximate query engine for running interactive SQL queries on large volumes of data. It allows users to trade-off query accuracy for response time, enabling interactive queries over massive data by running queries on data samples and presenting results annotated with meaningful error bars. To achieve this, BlinkDB uses two key ideas: (1) An adaptive optimization framework that builds and maintains a set of multi-dimensional samples from original data over time, and (2) A dynamic sample selection strategy that selects an appropriately sized sample based on a query's accuracy and/or response time requirements. We have evaluated BlinkDB on the well-known TPC-H benchmarks, a real-world analytic workload derived from Conviva Inc. and are in the process of deploying it at Facebook Inc. ALv2 University of California, Berkeley
Publications Mesos Infrastructure 2014-07 https://git-wip-us.apache.org/repos/asf/mesos.git stats Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, MPI, Hypertable, Spark, and other applications on a dynamically shared pool of nodes. ALv2 University of California, Berkeley
Publications Tachyon Infrastructure 2014-07 https://github.com/amplab/tachyon.git stats Tachyon is a fault tolerant distributed file system enabling reliable file sharing at memory-speed across cluster frameworks, such as Spark and MapReduce. It achieves high performance by leveraging lineage information and using memory aggressively. Tachyon caches working set files in memory, and enables different jobs/queries and frameworks to access cached files at memory speed. Thus, Tachyon avoids going to disk to load datasets that are frequently read. BSD University of Southern California
Publications goffish Infrastructure 2014-07 https://github.com/usc-cloud/goffish.git stats The GoFFish project offers a distributed framework for storing timeseries graphs and composing graph analytics. It takes a clean-slate approach that leverages best practices and patterns from scalable data analytics such as Hadoop, HDFS, Hive, and Giraph, but with an emphasis on performing native analytics on graph (rather than tuple) data structures. This offers a more intuitive storage, access and programming model for graph datasets while also ensuring performance optimized for efficient analysis over large graphs (millions-billions of vertices) and many instances of them (thousands-millions of graph instances). ALv2

Original post on Hacker News

Stephen Law: How the US Treasury imposes sanctions on me and every other "Stephen Law" on the planet - my letter to OFAC

Comments: "Stephen Law: How the US Treasury imposes sanctions on me and every other "Stephen Law" on the planet - my letter to OFAC"

URL: http://stephenlaw.blogspot.com/2014/02/how-us-treasury-imposes-sanctions-on-me.html


Right, here's another thing I am getting off my chest - email letter to OFAC (edited slightly from version sent).

Dear OFAC

This correspondence is copied to my UK Member of Parliament The Right Hon. Andrew Smith. Please copy him into your reply.

My name is "Stephen Law". The name "Stephen Law" appears on OFAC's "specially designated nationals" list:

Here is the actual OFAC listing for "Stephen Law", alias of "Steven Law"

https://ofac.data-list-search.com/Entities/ByName/steven-law

This person is Burmese and is suspected by US Treasury of drug trafficking. He is the son of Lo Hsing Han (dubbed by US Treasury as "The Godfather of Heroin") and has a Singaporean wife. His addresses, as listed by you, are all in Burma and Singapore. None are in the UK.

I have discovered that, as a result of this listing, US Customs block shipments of goods to me here in the UK. Also when people try to wire me money from abroad (not just from the US, but from anywhere), for e.g. occasional travel expenses for academic conference attendance, the payment is interrupted and various checks are made before the funds are released. This became so bad during one period (a series of payments every single one of which triggered a block) that I had to switch to a different bank account. At no point was I told why this was happening (i.e. that you, OFAC, are responsible). The banks concerned believe they must keep this information from me (I was told this by my bank branch). Hence it took me many months to figure out what the source of the problem was: OFAC/US Treasury.

It appears any "Stephen Law" anywhere in the world will suffer this same treatment, as indeed will anyone who merely happens to have the same name or alias as one of your "specially designated nationals". This has proved frustrating, time-consuming and also costly to me personally. E.g. I have paid US$77 postage for goods it turns out I can never receive, because they are returned by US customs to the US vendor because my name is listed. As a result of the OFAC listing, I cannot now order goods from - or receive gifts from friends and relatives in - the United States.

Can you inform me, given that I am very obviously NOT the Burmese Stephen Law:

(i) how I can avoid having all goods shipped to me from the US to my UK address being blocked and returned to sender by US customs?

(ii) how I can avoid my own bank repeatedly asking me who I am (and requesting information including my DOB, which they already possess) before unblocking any payment from abroad?

My bank knows who I am, and they know I am not the Burmese "Stephen Law" on the specially designated nationals list, but still I have to go through this same rigmarole every single time money is wired to me. How do I avoid this please?

Yours faithfully

Stephen Law

PS OFAC-caused delays to payments to me can run into weeks. On one occasion I ran up overdraft charges as a result of not receiving funds blocked by OFAC.

PPS I was interviewed by Foreign Policy magazine about all this a short while ago. I was also interviewed by News Hour on the BBC World Service.

Original post on Hacker News

The Science Behind 'Brain Training' - Dan Hurley - The Atlantic

Comments: "The Science Behind 'Brain Training' - Dan Hurley - The Atlantic"

URL: http://www.theatlantic.com/health/archive/2014/02/the-science-behind-brain-training/283634/


Increasing fluid intelligence has proven beneficial for people diagnosed with ADHD, and selling memory improvement is a big business. Are the claims overheated? 

anvilon/flickr

In 2002, Torkel Klingberg, a psychologist at Sweden’s Karolinska Institute, published a study involving 14 children with attention deficit hyperactivity disorder. All of the children were asked to spend a total of 10.5 hours, over five weeks, practicing computerized games that put demands on their working memory—their moment-by-moment attention and ability to juggle and analyze the objects of their attention.

Seven of the children played the games only at the beginner’s level; for the other seven, the games became progressively harder as the children got better. At the study’s end, the group who trained progressively not only improved on the games, but also on other measures of working memory. Their hyperactivity, as measured by head movement, lessened. And incredibly, even bizarrely by the standards of orthodoxy then holding sway, they also did much better on the Raven’s progressive matrices, long regarded as psychology’s single best measure of fluid intelligence. If the results were to be believed, the kids had gotten smarter.

"We don't have these huge studies that drug companies have. On the other hand, this is not something that is dangerous."

At first scoffed at, the little study has since led to dozens of other studies (15 of them listed here) aimed at replicating and expanding its finding. It also quickly led Klingberg and the Karolinska Institute to form a company, Cogmed, to turn working-memory training into a business. The initial target market was children with ADHD, whose parents hoped to find something other than drugs to improve their children’s attention; but soon the market expanded to treatments for both adults and children with a variety of cognitive disorders. By 2010, in a step suggesting just how vast a business this brain training could be, Cogmed was sold to Pearson, the largest education company in the world.

“Cogmed is a leader in the emerging field of evidence-based cognitive training,” the company states on its website. “We have scientifically validated research showing that Cogmed training provides substantial and lasting improvements in attention for people with poor working memory—in all age groups. That makes Cogmed’s products the best-validated products on the market.”

Aren’t claims like this rather overheated? I posed that question to Klingberg at Joe Coffee, a tiny, crowded coffee shop on West 23rd Street in Manhattan, where he was in town to give a talk at Columbia University. He wore a black leather jacket and a momentary scowl, having heard such critical questions many times before.

“Yeah, well,” he said with a shrug. “We did start to do research in 1999. Of course you can say that we still don’t have—we should wait another 10 years until we have thousands of participants. This is a general problem with cognitive training studies that we don’t have these huge studies that drug companies have. On the other hand, this is not something that is dangerous.”

He pointed out that Cogmed claims only that it can improve working memory, not fluid intelligence per se—even though many studies have found that working memory and fluid intelligence are closely related.

“What we see, over and over again,” he said, “is improvement of working memory and also of attention, including attention in everyday life. This is not everything, but it’s good enough for me if we can have that. Working-memory problems and attention problems are huge for many children and adults. Right now I don’t have any financial interest in Cogmed anymore. The influence I have had over Cogmed has been to make them very cautious. They don’t make claims about rejuvenating your brain or improving intelligence.”

At a total cost to families of about $2,000 for the 25 sessions, Cogmed compares favorably with many other kinds of ADHD treatments.

I asked him to describe exactly what kind of computerized training tasks Cogmed offers.

“There are 12 tasks,” he said. “They’re all visuospatial. The role of attention in working memory almost always has a spatial dimension. When you’re paying attention—even when you’re paying attention to me talking here in this café—there’s a spatial component. When there’s a loud noise, you might shift your attention to where it’s coming from. Being able to maintain your spatial focus on me is important for you right now. Even though it’s words coming from me, there’s an important component of space. So if you can improve the stability of that spatial aspect, you will be better at visuospatial tasks and be better at keeping your focus on me rather than on that noise over there.”

Still wanting a better sense of the exercises Cogmed offers, I scheduled a meeting with a clinical psychologist, Nicole Garcia, who offers the training just a few miles from my home in Montclair, New Jersey. She allowed me to sit down at her computer to play a handful of the games. (She emphasized that Cogmed calls them “training tasks,” not games. But they looked like games to me.)

I clicked one called Hidden, which showed a standard numeric keypad, the kind used on cell phones and calculators. The keypad was hidden while a man’s voice recited a short list of numbers. When his list was complete, the keypad reappeared, and I was supposed to click on the list—in reverse order. Another game showed a circle with nine smaller circles strung along it like carriages on a Ferris wheel. As the big circle slowly rotated in a clockwise direction, the little circles lit up in a random sequence. Once the sequence was completed, I had to click on the little circles in the same order.

All of the games were easy on the first pass, but immediately grew hard enough—meaning that the sequence to be remembered grew longer and was presented faster—that I began making mistakes.

Having offered Cogmed to a few dozen of her patients, as young as six and as old as 63 (including a successful attorney whose ADHD was diagnosed in her forties), Garcia said she’s convinced of its benefits, sometimes in combination with medication and sometimes on its own. And at a total cost to families of about $2,000 for the 25 sessions, she said, it compares favorably with many other kinds of ADHD treatments.

Where Cogmed beats all other forms of cognitive training is in the number of published, randomized clinical trials demonstrating its benefits and the number of trials still under way, led by independent researchers at leading institutions without any commercial connection to the company.

Julie Schweitzer, director of the ADHD Program at the University of California, Davis's MIND Institute, conducted a randomized study of children diagnosed with ADHD. When published in July 2012 in the journal Neurotherapeutics, Schweitzer’s study found that children in the placebo group spent just as much time off-task at the end of the study as they had at the beginning, but those who trained on Cogmed sharply increased the amount of time they spent doing school work.

"Around 20 to 40 percent of children treated for leukemia will end up with cognitive changes. For those treated for brain tumors, the figure is around 60 to 80 percent."

Children who have survived cancer are another group often in need of cognitive rehabilitation. “Somewhere around 20 to 40 percent of children treated for leukemia will end up with cognitive changes over time,” said Kristina K. Hardy, a neuropsychologist at Children’s National Medical Center in Washington, D.C. “For those treated for brain tumors, the figure is conservatively around 60 to 80 percent.”

What distinguishes these young survivors from most others seeking cognitive rehabilitation is that the effects of radiation or chemotherapy on the brain become apparent only with the passage of time. Immediately following treatment, a recent study found, survivors of acute lymphoblastic leukemia showed no significant change in their verbal IQ scores, but by early adulthood, their scores had dropped by an average of 10.3 points. 

In 2012, Hardy reported the results of a pilot study comparing Cogmed to a placebo form of computerized training. Among 20 children who had survived either brain cancer or leukemia, those who trained with Cogmed saw substantial improvements compared to the placebo group on their visual working memory and in parent-rated learning problems.

Most recently, children with Down syndrome have been shown to benefit from Cogmed. “Following training,” concluded a study published last June, “performance on trained and untrained visuospatial short-term memory tasks was significantly enhanced for children in the intervention group. This improvement was sustained four months later. These results suggest that computerized visuospatial memory training in a school setting is both feasible and effective for children with Down syndrome.”

Brian Skotko, co-director of the Down Syndrome Program at Massachusetts General Hospital, told me, “If Cogmed was a drug, everyone would call this study groundbreaking.”

Not all studies of Cogmed have been positive. A large one published in October found little benefit. But as Klingberg has written in defense of Cogmed in particular and working-memory training in general: “Working memory training is still a young field of research. As with all science, no single experiment explains everything, and results are never perfectly consistent … Many questions remain. But there is no going back to the notion that working memory capacity is fixed.”

This post is adapted from Dan Hurley's Smarter: The New Science of Building Brain Power.

Original post on Hacker News

Explore – …the appointee’s wife was granted a divorce from...

Comments: "...was granted a divorce because he was constantly working calculus problems"

URL: http://explore.noodle.org/post/73954140444/the-appointees-wife-was-granted-a-divorce-from


Original post on Hacker News

Phaser

Comments: "Phaser"

URL: http://phaserapp.com


Original post on Hacker News

UpCounsel Launches Outside General Counsel Program For Companies Needing Long-Term Legal Help | TechCrunch

Comments: "UpCounsel Launches Outside General Counsel Program For Companies Needing Long-Term Legal Help | TechCrunch"

URL: http://techcrunch.com/2014/02/14/upcounsel-outside-general-counsel/


Attorney marketplace UpCounsel has spent the last several months helping startups and other small businesses get affordable legal help. But for the most part, that help has mainly been focused on short-term projects that don’t require a ton of assistance over a longer period of time.

The startup hopes to change that with a new service that will connect technology companies in San Francisco with an outside general counsel to replace the legal help they’d usually get from a traditional law firm. Those firms can charge up to $800 per hour to work with a partner, but UpCounsel believes that by setting companies up with attorneys on its platform, it can drastically reduce that cost over time.

The Outside General Counsels it connects startups with are former senior associates and partners from large firms who have previously served in the general counsel role of technology companies. The only difference is now they’re working virtually through the UpCounsel platform.

To make sure that the general counsels are a good fit, UpCounsel does interviews with interested companies to determine what their needs are, and then tries to pair them with attorneys who understand their business and have the correct skill sets to support them. Attorneys get a company profile and dossier to review only if they are selected by the company to possibly represent them.

More than just making the connection between startups and the outside general counsel, UpCounsel also handles all admin and support for them. That includes billing, but also means helping them to find paralegals as well.

As part of the program, UpCounsel is also opening up not just to attorneys, but to professionals who have served as paralegals to support them. Again, since it doesn’t have all the overhead of the big firms, paralegals can be billed at about a third of what the big firms charge for their hourly work.

In addition to the lower cost, UpCounsel believes that its outside general counsels will be more responsive to legal requests than the folks who work at more traditional firms that are loaded up on casework. CEO Matt Faustman tells me that in its early trial, some startups have used the program to complement their existing firms for when they need more immediate help.

But the platform is also seeing some startups move completely to adopt its outside general counsels, with about 10 totally jumping ship from some big firms you’ve probably heard of.

Now that it’s launched, we’ll see how well the program actually works. In the meantime, UpCounsel has raised $1.5 million in seed funding from folks that include Homebrew, Bobby Yazdani, SV Angel, Collaborative Fund, Haroon Mokhtarzada, and other angels.

Original post on Hacker News

First geologic map of Ganymede made with Voyager data | Ars Technica

Comments: "First geologic map of Ganymede made with Voyager data | Ars Technica"

URL: http://arstechnica.com/science/2014/02/first-geologic-map-of-ganymede-made-with-voyager-data/


Imagery of Ganymede's surface (right) and the new map of its geology (left). USGS

Maps have always been an integral part of exploration. They take the in out of terra incognita. Some things are easier to map than others, of course. The geology of a world a few hundred million miles away is one of those other things. Nevertheless, the United States Geological Survey just released a geologic map of Jupiter’s moon Ganymede—an icy satellite larger than Mercury.

The map was created through the hard work of a team led by Wheaton College’s Geoffrey Collins using imagery from the Voyager probes and the more recent Galileo mission. Much in the way that geologists can determine the relative ages of Earth rocks by noting which rocks cut into or through others, Ganymede’s surface can tell us about its own geologic history.

The researchers identified three basic periods in that history, which they named the Gilgameshan, Harpagian, and Nicholsonian periods. The oldest is marked by an abundance of impact craters, the second by extensive tectonic activity that altered and deformed the surface, and the youngest by an absence of significant activity.

A PDF of the detailed map, which might look great on your wall, is available on the USGS website.

Original post on Hacker News

Rendered Prose Diffs · GitHub

Comments: "Rendered Prose Diffs · GitHub"

URL: https://github.com/blog/1784-rendered-prose-diffs


Today we are making it easier to review and collaborate on prose documents. Commits and pull requests including prose files now feature source and rendered views.

Click the "rendered" button to see the changes as they'll appear in the rendered document. Rendered prose view is handy when you're adding, removing, and editing text:

Or working with more complex structures like tables:

Non-text changes appear with a low-key dotted underline. Hover over the text to see what has changed:

Building great software is about more than code. Whether you're writing docs, planning development, or blogging what you've learned, better prose makes for better products. Go forth and write together!

Need help or found a bug? Contact us.

Original post on Hacker News

Real-time tech support with Olark — break the bit

Comments: "Real-time tech support with Olark integration into your client-side app"

URL: http://breakthebit.org/post/75335427347/real-time-tech-support-with-olark


Foreword

We’ve come a long way since we started public beta testing of Dubjoy, our in-browser video voice-over solution.

We’re targeting a pretty specific audience: language service providers, voice actors, voice talents, translators and interpreters. This is a mostly non-technical crowd that, like most people, can’t efficiently debug, reproduce or describe problems as they occur.

We’ve come a long way since then: most of those problems have been solved, and the software is fast approaching production quality.

But still, we were desperate for a way to better assist our customers, preferably in real-time.

There are many moving parts to a voice recording app in the browser. And moving parts always make for a fun array of problems customers encounter.

We have to be able to assist people while learning how to use the software for the first time. There can be workflow issues, microphone sensitivity and permission problems, Flash versions, etc.

Communication with your customers is key, and I can’t describe how overwhelmingly positive the reactions have been.

They love to see that you have their back at all times.

Required capabilities

So our ideal system would have the following capabilities:

  • real-time feedback and help,
  • log forwarding,
  • error forwarding,
  • diagnostic checks,
  • recovery routines,
  • reset routines,
  • system information.
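To make the "log forwarding" item concrete, here is a minimal JavaScript sketch (not Dubjoy's actual code) of the idea: wrap console.log so that, while an operator has enabled debugging, every log line is also mirrored to the chat. The sendToOperator function is a hypothetical stand-in for a real chat API call such as Olark's api.chat.sendNotificationToOperator:

```javascript
// Collected messages stand in for lines pushed into the operator's chat.
const forwarded = [];

function sendToOperator(body) {
  // In a real Olark integration this would be something like:
  // olark('api.chat.sendNotificationToOperator', { body: body });
  forwarded.push(body);
}

let debugEnabled = false;
const originalLog = console.log;

console.log = function (...args) {
  originalLog.apply(console, args); // keep normal console behavior
  if (debugEnabled) {
    sendToOperator('[LOG] ' + args.join(' ')); // mirror to the operator
  }
};

// Operator commands like !debugon / !debugoff just flip the flag.
function debugOn() { debugEnabled = true; }
function debugOff() { debugEnabled = false; }
```

Error forwarding works the same way, with a window.onerror handler in place of the console.log wrapper.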

And we set out to find something out there that could help.

Olark

If you’re selling anything on the web and don’t use Olark you’re missing out.

They make a chat widget that you integrate into your site. Once installed, your website visitors appear as chat contacts in your IM program, such as Google Talk.

You can click on anyone and start a conversation, or wait for someone who needs help to click the “Chat with us” button.

So this tool by itself takes care of our first requirement: real-time feedback and help.

But as we delved deeper into their developer API documentation, we discovered that you can easily extend the functionality of the chat widget by creating your own custom commands and binding to certain events, like when a conversation begins.

Anatomy of our Olark integration

Olark has a nice feature whereby chat operators can issue commands.

A command is simply any word prefixed with an exclamation mark, like !debugon.

The “brain” of the integration is a simple parser, that hooks up to the api.chat.onCommandFromOperator event.

A simple example of this, an implementation of the !explainer command:

olark 'api.chat.onCommandFromOperator', (event) ->
  # !explainer
  # Show the popup with the explainer video.
  if event.command.name is 'explainer'
    V.olark.send "[EXPLAINER] Showing popup"
    V.overlay.help()
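In plain JavaScript, the same idea can be sketched as a table of handlers keyed by command name — a minimal illustration, not Dubjoy’s actual code; the `send` callback and the handler bodies are stand-ins for the real `V.olark` helpers:

```javascript
// Table of operator commands, keyed by name (without the "!" prefix).
// The handler bodies here are placeholders for the real app actions.
const handlers = {
  explainer: (send) => {
    send("[EXPLAINER] Showing popup");
    // The real handler would also call V.overlay.help() here.
  },
};

// Dispatch an Olark operator-command event to the matching handler,
// falling back to a help hint for unknown commands.
function dispatch(event, send) {
  const handler = handlers[event.command.name];
  if (handler) {
    handler(send);
  } else {
    send(`[HELP] Unknown command: !${event.command.name}`);
  }
}

// In the real integration this would be wired up roughly as:
//   olark('api.chat.onCommandFromOperator', (event) => dispatch(event, V.olark.send));
```

The table-driven shape keeps each command small and makes the `!dubjoy` help listing easy to generate from the same table.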

So Dubjoy’s current integration looks like this (you can list the current commands by issuing !dubjoy):

# !dubjoy
# Show Dubjoy help and commands.
if event.command.name is 'dubjoy'
  V.olark.send "[HELP] Dubjoy Olark integration\n"
  V.olark.send "!status - Status report (env, kombajn, video, frags, app)"
  V.olark.send "!debugon [<level>] - Turn on log fwd at a level"
  V.olark.send "!debugoff - Turn off log fwd"
  V.olark.send "!reload [<restore>] - Force reload and restore state"
  V.olark.send "!undo - Undo last step"
  V.olark.send "!explainer - Show the explainer popup"
  V.olark.send "!fragments - Output detailed fragments data"
  V.olark.send "!fragmiss - Output fragments that are missing audio"
  V.olark.send "!fragclear <fragment_id> - Clear audio data for fragment"
  V.olark.send "!fragseek <fragment_id> - Seek to fragment"
  V.olark.send "!diagnose - Run system checks"
  V.olark.send "!system <details> - Show system information"
  V.olark.send "!video - Show video information, such as YouTube ID, etc."
  V.olark.send "!mic - Show if mic access is allowed"
  V.olark.send "!audio - Show the average and latest audio levels"
  V.olark.send "!audiotest - Send the audio test URL and instructions"
  V.olark.send "!exec <command> - Execute command in V. namespace (e.g.: '!exec video.play()' would execute 'V.video.play()')"

We have a variety of commands for performing basic to advanced tasks to help the customer, run diagnostic tests and check system information.

If you prefer reading code to prose, or want to learn more, check out our Olark integration class.

Real-time feedback and help

It’s super useful to be able to help customers who are lost or just in the app for the first time.

One of our options is to simply issue an !explainer command, which pops up a short video on the customer’s screen showing her how to use the app.

Using Olark this way makes customers feel like you always have their back. They appreciate it, and in turn readily forgive you the early bugs.

Log forwarding and error reporting

We rely heavily on logging to know in depth what’s going on in our application. Everything from events firing and video currentTime updates to audio encoding progress gets logged, at various levels of specificity.

It doesn’t hurt to be instantly alerted whenever a breaking error occurs on the client side, either. An error instantly pops up a chat window for all of our engineers, and whoever responds first takes care of the customer.
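One way to wire such alerts up — a sketch, not Dubjoy’s actual code — is a global error handler that formats the exception and forwards it into the operator chat; the `forward` callback stands in for whatever helper sends notifications through Olark:

```javascript
// Format an uncaught client-side error and hand it to a forwarding
// callback (in the real app, something that notifies the operator chat).
function reportError(forward, message, source, line) {
  forward({ body: `[ERROR] ${message} (${source}:${line})` });
}

// In the browser this would be registered as the global error handler:
//   window.onerror = (msg, src, line) =>
//     reportError((n) => V.olark.send(n.body), msg, src, line);
```

Keeping the formatting in a plain function, with the transport injected, means the same routine can feed a chat notification, the console, or a server-side log.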

Now imagine this scenario:

A customer is trying to voice-over a video in Dubjoy and gets stuck with the voice not recording.

She finds the Olark widget and writes us “Help! My voice is not being recorded!”.

We reply “Just a second” and turn on log forwarding with a !debugon custom command implemented for the Olark API.

This starts forwarding all of the logging data the customer is generating to your chat window instead of her console. All in real time.

This is beyond powerful: you can instantly see what the customer is doing or clicking, and of course what the app itself is doing.

How do we do it?

We have some central logging routines that we use throughout the app.

This made it trivial to plug in some code that also sends the output through Olark. This isn’t done by default; the current operator can turn it on or off simply by issuing the !debugon command.

This is a snippet from the logging routine that integrates with Olark:

# Log to Olark
if V.olark
  if output_routine is 'error'
    # Log errors ALWAYS to Olark.
    V.olark.log type: 'ERROR', text: msg, args: args
  else if V.olark.debug and V.olark.debug is true
    # Forward logs to Olark if debugging is turned on.
    V.olark.log text: msg, args: args
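The !debugon / !debugoff toggle that drives that flag can be tiny — a sketch in plain JavaScript, with a stand-in state object rather than the real V.olark namespace:

```javascript
// Handle the !debugon / !debugoff operator commands by flipping the
// debug flag that the logging routine checks. `state` stands in for
// the V.olark namespace; `state.send` reports back to the operator.
function handleDebugCommand(state, name) {
  if (name === 'debugon') {
    state.debug = true;
    state.send("[DEBUG] Log forwarding ON");
  } else if (name === 'debugoff') {
    state.debug = false;
    state.send("[DEBUG] Log forwarding OFF");
  }
}
```

Because the logging routine re-reads the flag on every call, flipping it takes effect immediately, mid-session.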


Diagnostic checks

It’s useful to have a single command that runs some basic diagnostic checks on the customer’s system.

We quickly check if the microphone permissions are set correctly, if the mic sensitivity is in order, Flash version, native MP4 support, video buffer status and more.

Here’s a quickly whipped-up !diagnose command in action:

We also have detailed audio diagnosis commands like !audio that show the gain levels of the customer’s last recording. This way we can quickly see if the mic isn’t getting any signal through, if the sensitivity is too low, etc.
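A !diagnose handler can boil down to running a list of named checks and reporting each result — a sketch with hypothetical check functions, not Dubjoy’s actual tests:

```javascript
// Run each named check and report OK/FAIL to the operator.
// A check that throws counts as a failure. Check names and bodies
// here are illustrative only.
function runDiagnostics(checks, send) {
  for (const [name, check] of Object.entries(checks)) {
    let ok;
    try {
      ok = check();
    } catch (e) {
      ok = false;
    }
    send(`[DIAGNOSE] ${name}: ${ok ? 'OK' : 'FAIL'}`);
  }
}

// Hypothetical usage:
//   runDiagnostics({
//     'mic permission': () => micAllowed(),
//     'flash version':  () => flashVersionAtLeast(11),
//   }, V.olark.send);
```

Adding a new check is then just adding one entry to the table, so the !diagnose report grows with the app.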

Recovery and reset routines

It’s useful that if a customer’s session somehow gets corrupted, we can trigger a reload on their end and choose whether to restore their data or start with a clean slate.
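A !reload handler along these lines might stash the session state before forcing the refresh — a sketch with the storage and reload mechanisms injected (the real routine lives in Dubjoy’s client, and the `dubjoy.session` key is made up for illustration):

```javascript
// Force a page reload, optionally preserving the session state so it
// can be restored on the next load. `storage` and `reload` are injected
// so the logic stays testable outside a browser.
function forceReload(state, restore, storage, reload) {
  if (restore) {
    storage.setItem('dubjoy.session', JSON.stringify(state));
  } else {
    storage.removeItem('dubjoy.session'); // clean slate
  }
  reload();
}

// In the browser:
//   forceReload(session, true, window.localStorage, () => location.reload());
```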

System information

Olark by itself shows some of the customer’s system information, like the browser version, in the tooltip of your IM client.

But it’s useful for us to have a manual way of bringing up some relevant information.

By issuing !system we get a bunch of useful information about the customer’s browser and system.

By issuing !video we get crucial information about the video they’re currently working on.
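A !system handler mostly just reads a few browser properties and formats them — a sketch with illustrative fields, and the navigator object injected so the function runs outside a browser:

```javascript
// Report a few system facts to the operator. `nav` is the browser's
// navigator object, injected here for testability; the fields shown
// are examples, not Dubjoy's exact report.
function systemReport(nav, send) {
  send(`[SYSTEM] UA: ${nav.userAgent}`);
  send(`[SYSTEM] Platform: ${nav.platform}`);
  send(`[SYSTEM] Language: ${nav.language}`);
}

// In the app: systemReport(window.navigator, V.olark.send);
```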

Conclusion

We’re extremely happy with the way this turned out and so are our customers.

The extensibility of the Olark platform offers unlimited potential for customer support in any JavaScript client-side application.

Original post on Hacker News

Cryptic Crossword: Amateur Crypto and Reverse Engineering

Comments: "Cryptic Crossword: Amateur Crypto and Reverse Engineering"

URL: http://www.muppetlabs.com/~breadbox/txt/acre.html


Original post on Hacker News

The Coinbase Blog — Introducing “BitHack”: Hackathon by Coinbase

Comments: "The Coinbase Blog — Introducing “BitHack”: Hackathon by Coinbase"

URL: http://blog.coinbase.com/post/76553987867/introducing-bithack-hackathon-by-coinbase


We’re excited to announce the launch of Bithackathon.com – an online hackathon to inspire developers across platforms and continents to build solutions with bitcoin.


We will judge app entries based on creativity, usability, and execution. The prize? Bitcoins!

  • 1st prize: $10,000 worth of bitcoin
  • 2nd prize: $5,000 worth of bitcoin
  • 3rd prize: $3,000 worth of bitcoin

Criteria

We are looking for apps that excel in three areas:

1) Creativity:

  • Originality of idea
  • Innovation

2) Usability:

3) Execution:

We invite any and all developers around the world to participate in the competition. Check out www.bithackathon.com for more information, and spread the word! We look forward to seeing you there.

Original post on Hacker News
