Marissa Alexander, the black Florida mother who was sentenced to 20 years in prison for firing a warning shot against her abusive husband, will get a new trial this spring. But her supporters are calling for the charges to be dropped altogether.
As we’ve previously covered, Alexander had a restraining order against her husband when he yelled, “Bitch, I will kill you!” and charged toward her during the incident in 2010. She fired a single shot into the ceiling, and no one was hurt. The sentence Alexander received would seem absurdly harsh anywhere, but especially in Florida, where the right to “stand your ground” apparently applies to aggressors “threatened” by a bag of Skittles, but not to abused black women.
Last month, a court overturned Alexander’s original guilty verdict, and activists have called for the charges to be dropped. Instead, the state is going to prosecute her once again. Alexander, who has already been in jail for three years while this all plays out, will find out next week whether she will be released on bail. The Free Marissa Now campaign will be fundraising to cover her legal costs for the new trial in March. The goal is to raise $10,000 by the end of the year, and you can help here.
“Ultimately, most things that are offensive are also lazy and unoriginal; because you can’t reach that point of view by looking at the world honestly…You reach that point of view by taking short cuts and by just sort of repeating what someone else told you.”—Joseph Fink (via givesgoodface)
Google no longer understands how its “deep learning” decision-making computer systems have made themselves so good at recognizing things in photos.
This means the internet giant may need fewer experts in future as it can instead rely on its semi-autonomous, semi-smart machines to solve problems all on their own.
The claims were made at the Machine Learning Conference in San Francisco on Friday by Google software engineer Quoc V. Le in a talk in which he outlined some of the ways the content-slurper is putting “deep learning” systems to work.
"Deep learning" involves large clusters of computers ingesting and automatically classifying data, such as pictures. Google uses the technology for services like Android voice-controlled search, image recognition, and Google translate, among others. […]
What stunned Quoc V. Le is that the machine has learned to pick out features in things like paper shredders that people can’t easily spot – you’ve seen one shredder, you’ve seen them all, practically. But not so for Google’s monster.
Learning “how to engineer features to recognize that that’s a shredder – that’s very complicated,” he explained. “I spent a lot of thoughts on it and couldn’t do it.” […]
This means that for some things, Google researchers can no longer explain exactly how the system has learned to spot certain objects, because the programming appears to think independently from its creators, and its complex cognitive processes are inscrutable. This “thinking” is within an extremely narrow remit, but it is demonstrably effective and independently verifiable.
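The excerpt above describes machines learning their own classification rules rather than having humans engineer them. A toy sketch of that core idea, using a single perceptron in plain Python (a deliberately tiny illustration, nothing like Google’s actual deep-learning clusters), might look like:

```python
import random

random.seed(0)

# Labelled examples: 1 if x + y > 1.0, else 0. The learner never sees
# this rule -- only the (inputs, label) pairs.
examples = [((x, y), 1 if x + y > 1.0 else 0)
            for x, y in ((random.random(), random.random())
                         for _ in range(200))]

# A single perceptron: weights start at zero and are nudged by errors.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(20):                       # training epochs
    for (x, y), label in examples:
        pred = 1 if w[0] * x + w[1] * y + b > 0 else 0
        err = label - pred                # -1, 0, or +1
        w[0] += lr * err * x
        w[1] += lr * err * y
        b += lr * err

# The learned weights now encode a decision rule nobody typed in.
correct = sum(1 for (x, y), label in examples
              if (1 if w[0] * x + w[1] * y + b > 0 else 0) == label)
accuracy = correct / len(examples)
print(accuracy)
```

Even in this toy case, the “knowledge” lives in the numeric weights rather than in any human-readable rule, which hints at why inspecting what a vastly larger network has learned is so hard.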