Cognitive computing is widely considered the most vital manifestation of artificial intelligence.
These Two Gay Renaissance Men Revolutionized How LGBT People Communicate
After finding success with everything from "alternative lifestyle" stores to the earliest gay websites to the next generation of robotics, Andy Cramer and Al Farmer could be excused for wanting to retire to a quiet hamlet like Provincetown or Key West. But the husbands and business partners are just getting ready to roll up their sleeves, with the hope of making it easier for LGBT entrepreneurs like themselves to succeed.
Cramer and Farmer, based in San Francisco, are currently using their tech background to strengthen the mission of StartOut, a nonprofit that connects LGBT business owners and assists with mentoring and funding. They are also looking to help minority entrepreneurs through their own website, Alternative Spaces, by offering web and mobile development services.
"We're really trying to change the dialogue by reporting the number of jobs made [by LGBT people]," Cramer says. "By reporting that this gay couple now employs 100 people in a small town, it's changing the conversation. If we don't change the conversation, they'll call us names forever."
Cramer has been working in the LGBT space for over 40 years. In the '70s, he founded and operated 10 Headlines stores, famous on Castro and Polk Streets in San Francisco, as daytime gathering places for the burgeoning queer community. The stores offered a welcoming and offbeat environment to meet, greet, and shop for "alternative lifestyle" merchandise — e.g., costumes, wigs, and toys — as it was known then. The stores featured same-sex couples together in store windows, a bold display for the time. Cramer pushed Halloween as an unofficial gay holiday, ran “Around the World” trip giveaway promotions, sold tickets to major parties and events, and ensured that every customer was treated with dignity.
By 1981, though, AIDS was starting to ravage San Francisco. The community mobilized rapidly, and the Headlines stores became centers of compassion and involvement. Two-thirds of the Headlines employees were living with HIV or AIDS, Cramer says. Workers split shifts into two-hour segments for employees unable to work longer hours, and the stores donated warm clothing for patients suffering chills from pneumocystis pneumonia. Baskets of condoms were displayed at every cash register, each selling for a penny, and more than 8 million condoms were eventually distributed.
Meanwhile, Cramer worked with Tom Waddell, a former Olympian, to create the first Gay Games, to prove the community was more than the sick and dying. The experience of working at Headlines showed Cramer the importance of taking care of one's own.
"We were living in a war zone," Cramer says. "Organizing, supporting, and fighting back was what the community did alongside each other, long before there was any assistance from outside resources."
Andy sold Headlines in 1993, after discovering there was a way to reach far more LGBT people — the Internet. With no prior experience in technology, Cramer founded Gay.net, the first online site offering a graphical interface overlaid on a bulletin board service. The website was also the first uncensored online meeting place for gay and bi men, where they could be themselves and connect with others, even if they lived far outside the gay meccas.
Cramer also worked with the gay newspaper guild in 10 major cities to publish local and national news so remote community members could access the latest headlines. Initially, Gay.net ran on dial-up modem connections, which limited the number of participants online at any one time to 16, the number of modems available at launch. The pool quickly grew to 64 modems, but access was still limited. One member, Farmer, couldn't connect for a month because the modems were always busy. He threatened to quit. Cramer refunded his $9.99 and apologized, and a relationship was born, a business one at first.
Farmer, like Cramer, felt a desire to be part of the LGBT rights movement. He led the University of New Hampshire in the 1993 March on Washington, joined the UNH LGBT Center, and facilitated on-campus coming-out groups and education on queer issues.
After becoming a computer scientist who would later work for the Department of Energy and IBM, Farmer wanted to help Cramer broaden Gay.net's reach and allow more connection between LGBT people.
For two years, Cramer and Farmer worked exclusively on Gay.net by chatting online, Cramer in San Francisco and Farmer in Boston. After spending thousands of hours together online, helping people talk about coming out and supplying them with tools to engage in meaningful relationships, the two men fell in love themselves. The shared empathy for those who were hurting and alone and a common mission to build a worldwide community using supportive technology did the trick, they say. Farmer moved to San Francisco in 1997, and the two have been together ever since. They had a commitment ceremony in 1998 and married on their 10th anniversary in 2008, when same-sex marriage was briefly legal in California.
By 1995, Gay.net had 10,000 paying members; Cramer merged existing online gay properties, including Gay.com and onQ on AOL, and soon had more than a million members communicating online. The company pivoted to an advertising model, bringing in American Airlines and IBM as some of the first mainstream companies willing to publicly promote to the gay community. In late 1999, after six nonstop years and with millions of members online, Cramer and Farmer sold a portion of their stock and exited the company. Investors took the company public in 2004 at a $145 million valuation.
Wanting to use the capital they had earned to help others, Cramer and Farmer founded Azure Wellness, working with a prominent San Francisco HIV physician to formulate supplements for people with HIV. They learned a lesson: HIV-positive people don't want to take more pills. Cramer and Farmer closed the company and donated $250,000 in supplements to people living with AIDS.
"It was the most satisfying failure in my career," Cramer says.
In the ensuing years, Cramer built out concept stores for Stadtlander Pharmacies, repositioning them as community HIV pharmacies, offering Eastern and Western remedies and educational opportunities. Cramer and Farmer continued their tech work as well, developing a platform that provided business applications and analytics. The men also worked at supporting public companies they respected, and that work led them to StartOut, a nonprofit that aims to connect 100,000 LGBT entrepreneurs online.
StartOut has six active local chapters, and by expanding its membership base, it's creating more connections and helping LGBT entrepreneurs everywhere, Cramer says. Utilizing some of Cramer and Farmer's programs, StartOut expanded its services. StartOut’s community online service now includes a mentorship portal, a funding portal that matches companies that are raising funds with accredited investors, a permission-based community directory to find other LGBT entrepreneurs for collaboration, and a public business forum. Basic membership is free, and upgrades are inexpensive and tax-deductible.
The men see StartOut as a way to not only uplift LGBT businesses but also chip away at intolerance. "You have collectivism that's totally economic," Cramer says. "How many of your employers will go home at night and say, 'Don't call those people fags, because they're our bosses and are good to us and give us benefits?'"
Cramer and Farmer have founded several other companies, most notably Alternative Spaces, a professional outsourcing technology company with over 100 developers, project managers, and designers. Alternative Spaces produces applications used by many popular websites; one such client is the LGBT home rental site Misterb&b, which outsourced nearly all of its technology to the company.
On top of all this, Farmer is a self-described "futurist" and has been working with artificial intelligence and social robotics for four years. He's leading the work with Amazon’s Echo, Google Home, and other voice-assistive devices. Farmer is currently working with Jibo, a company that will soon be launching the world’s first social robot.
Even with all their varied interests, they are most satisfied helping people find success and contentment.
"We plan to promote technology that helps individuals lead better lives," Farmer says of their future.
Cramer and Farmer will be sharing their insight into emerging technologies and technology production strategies in a summer series they're hosting. To register for an online seminar, visit Alternative-Spaces.com.
As of 2016, there were 1 million job openings in home care, and 35 million people were working as unpaid home-care workers (children taking care of parents, grandparents, uncles, or aunts). The baby boomers are about to enter the market, and 90% prefer staying at home over entering any care facility.
By 2030, the population of adults 65 and older will grow by 81%.
Older homebound people often complain of feeling isolated and marginalized. But thanks to innovative services and products that help them feel connected, such as virtual reading groups, the barriers created by aging are starting to come down.
By 2020, the nursing workforce will fall 20% below projected needs. Home care will add one million jobs between 2012 and 2022, among the highest-growth occupations. We will modernize and grow this workforce into the future of home care.
2016 Caregiving Innovation, AARP
2016 Aging2.0 Senior Care Innovation & Tech Use Survey Report
We feel that storytelling is intrinsic to our society and can bring joy and education to people of all ages. Storytelling is 10,000 years old and most likely the oldest form of emotional communication and learning. Remember that storytelling predates writing and requires a storyteller and a good listener.
We are currently working on a children's book that uses AI, allowing the audience to talk with an IoT device and choose different paths to create different endings. It is designed for middle school children, but we imagine people of all ages being able to set up their own stories and add content.
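A branching story of this kind is usually represented as a graph of passages and choices. The sketch below is purely illustrative; the node names and story text are invented, and this is not the structure of our actual product:

```python
# Toy branching-story graph: each node maps to a (passage, choices) pair,
# where choices maps a spoken choice to the next node. All content invented.
story = {
    "start": ("A dragon blocks the forest path.",
              {"talk": "friend", "run": "village"}),
    "friend": ("The dragon just wanted company. You make a friend!", {}),
    "village": ("You tell the village about the dragon. The end.", {}),
}

def tell(node="start", choices=()):
    """Walk the story graph, following the listener's choices in order,
    and return the passages told along the way."""
    passage, options = story[node]
    told = [passage]
    for choice in choices:
        node = options[choice]          # the listener's choice picks the branch
        passage, options = story[node]
        told.append(passage)
    return told

# Two listeners making different choices hear different endings:
print(tell("start", ["talk"]))
print(tell("start", ["run"]))
```

In a voice-driven version, the spoken choice ("talk" or "run") would come from the device's speech recognition rather than a function argument.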
Our primary focus is eldercare. It's particularly close to me, since I am in my late 60s and my mother has been in assisted living for years. She has had back surgery and advanced arthritis and is unable to walk more than a few steps. Added to that, she has late-stage macular degeneration and is hard of hearing. Her isolation is painful, and Al and I bought her an Echo, hoping it would provide company and let her listen to the books she enjoys. She enjoys listening to music, but at 90 and almost blind, she is unable to remember or read the commands to wake up Alexa and ask for specific books or explore new forms of entertainment. My mother was an avid reader her whole life and even learned to convert books into Braille 30 years ago. Now, being unable to get out and around has created a deep feeling of isolation. Last week she told me that she is lonely and bored and that any voice would be a blessing, even one that is AI-generated.
I'm going to continue to write about our progress. As a baby boomer, I can see what is ahead. Forbes reported, "By 2020, 117 million Americans are expected to need assistance of some kind, yet the overall number of unpaid caregivers is only projected to reach 45 million." Advances in AI will enable capabilities such as facial recognition, and older Americans will be better able to exercise their minds, speak with their families, remember to take medication, and have an assistant who provides choices and companionship.
Why is this the case? For some systems, the local hardware listens for this attention command by processing all the audio it hears, sorting through sounds, looking for the "Attention Command." Once the correct command is recognized, the system switches into cloud mode and sends your audio to the cloud to be processed. Herein lies the rub: for privacy, we don't want a cloud system listening to our every conversation and sending it all up to the cloud, so audio is sent only after you have gotten the system's attention.
Because the "Attention Command" is the only way a voice can activate the system, using it requires prior knowledge of the particular command and of the fact that the word itself is a command to start listening. Without that knowledge, a user might speak the first few words of a sentence before the system has started listening. That is why saying the "Attention Command," then pausing, is more effective.
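The gating behavior described above can be sketched in a few lines. This is a toy model, not Amazon's implementation: it assumes speech arrives as a stream of word tokens and simply discards everything heard before the wake word, which is exactly why words spoken too early are lost.

```python
# Toy sketch of local wake-word gating: nothing is captured (or sent to
# the cloud) until the attention command is heard. "alexa" is used as an
# example wake word.
def gate_on_wake_word(tokens, wake_word="alexa"):
    """Return only the words spoken AFTER the wake word; words spoken
    before it are discarded, never transmitted."""
    heard_wake = False
    captured = []
    for token in tokens:
        if heard_wake:
            captured.append(token)
        elif token.lower().strip(",.?") == wake_word:
            heard_wake = True  # switch from passive listening to capture
    return captured

# A user who starts the sentence before the wake word loses the opening
# words, which is why a pause after the attention command helps:
print(gate_on_wake_word(["please", "tell", "me", "alexa", "the", "weather"]))
print(gate_on_wake_word(["alexa,", "what's", "the", "weather"]))
```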
The industry's goal is to hide this complexity from the user, seamlessly providing a simulation of human dialogue. Example: "Alexa, what's the weather" produces the correct response if the system is set up correctly. As we grow accustomed to asking the same thing each day, in such a nonchalant manner, we forget the uniqueness of this "weather command." It is just a single series of words that activates the request. But after the command has executed, Alexa is back to listening for "Alexa," no longer focused on you. The reason is that this AI is not "an AI"; it is many AI systems acting as a single AI, a big set of systems waiting for the next command. When we break down our example, "Alexa, what's the weather?", we can see that this simple question involves a complicated process.
First, the hardware in Amazon's Echo contains an AI that listens to and processes all the audio around it, whenever it is on, for the word "Alexa." This Local Command Word Processor (local speech to command) is AI #1.
Second, the audio containing the spoken words "what's the weather" is turned into text. This Voice to Text system uses multitudes of recorded utterances to piece the sounds together into letters and words; in this case, it also needed to recognize the contraction of "what is" as "what's." Voice to Text is AI #2.
Third, Natural Language Processing, an AI that specializes in the meaning of text, turns "what's the weather?" into commands. Natural Language Processing is AI #3.
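To make the input/output shape of this step concrete, here is a deliberately simplistic rule-based stand-in. A real NLP system is statistical and far more capable; the intent names and slot layout below are invented for illustration only.

```python
# Toy intent parser in the spirit of AI #3: map transcribed text to a
# structured command. Intent names and slots are made up for this sketch.
def parse_intent(text):
    normalized = text.lower().rstrip("?").strip()
    if "weather" in normalized:
        # A real system would also extract a location; we default it here.
        return {"intent": "get_weather", "slots": {"location": "default_zip"}}
    if normalized.startswith("turn on"):
        return {"intent": "turn_on",
                "slots": {"device": normalized[len("turn on"):].strip()}}
    return {"intent": "unknown", "slots": {}}

print(parse_intent("what's the weather?"))
print(parse_intent("turn on the lights"))
```

The structured result, not the raw text, is what downstream programs act on.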
Once the user's intention is determined, ordinary programs use a locally defined zip code to request the weather for that location from an internet weather source, then organize the results into reply text. That text is, in turn, passed to AI #4, Text to Speech, which renders the text as audio, matching it against multitudes of recorded utterances to construct sounds into words and words into sentences, adding a cadence, and producing an audio file that is passed to the Echo's speaker to play.
With four AI systems working together, the complexity of what is involved in delivering a spoken audio response from Alexa is enormous.
Teaching users how to use an AI system is a problem for the entire AI industry, because we don't address each other by first calling out a name and waiting for an acknowledgment before speaking a command. When we want something from someone, we say it all at once: "Al, turn on the lights" comes out in one breath, without a pause. It is not natural for humans to pause after calling someone's name, especially when we are feeling physically comfortable, and these AIs live precisely in the environments where we feel comfortable. They must therefore adapt to us for the best adoption and the lowest attrition. The systems are built, in part, for human comfort, yet the nuance of a simple pause stands in the way of faster adoption. Chances are, if you know what you want and can see the sentence in your mind before saying it, then call out the AI start command, say, "Alexa," wait for the system to acknowledge your request for attention, and then speak the sentence. Then, and only then, will you almost always get what you want, and quickly.
Currently, a user must think about what information they want from our AI systems before speaking to them to ensure an accurate response. Understanding how these systems work and how we communicate with each other is an important first step towards full adoption.
Current AI systems, such as Amazon's Echo wireless speaker with the Alexa assistant, connect to your Amazon data to generate responses. Alexa consists of groups of cloud-based and local AI systems that work together, only after being activated correctly, to perform a single command, or at most two or three at a time (josh.ai and Google can process multiple commands given at once).
Making the system so easy to use that no thought is involved is premature. It is still critical that we think about what we want from our AI systems before speaking to one, if we want to ensure an accurate response.
I'm an entrepreneur, community builder, and technology innovator who deeply cares about those of us who end up alone in a home or a room with nothing to do and no way to do it. The technology is here, and with some effort we can bring it to those who need a companion or an access point to family and society. This project will be helpful beyond measure, and it is needed right now, everywhere someone sits alone suffering from disconnected loneliness and loss of purpose. Vital members of our society are within this community, and within our reach.