
Top Jobs Likely to Be Directly Impacted by AI
By Susan Gonzales on 08/18/2025 @ 09:08 AM
A recent research study of 200,000 people online in 2024 measured how much AI overlaps with everyday job tasks. The researchers created an “AI applicability score” to rank jobs by vulnerability.
Top 10 Most AI‑Vulnerable Jobs
These roles involve lots of writing, speaking, or regularly handling information—areas where AI already shines:
- Interpreters and Translators
- Historians
- Passenger Attendants
- Sales Representatives (Services)
- Writers and Authors
- Customer Service Representatives
- CNC Tool Programmers
- Telephone Operators
- Ticket Agents / Travel Clerks
- Broadcast Announcers and Radio DJs
AI tools can already perform many tasks these jobs require—such as drafting text, researching facts, answering questions, and providing advice or translations.
Jobs Most Resistant to AI
On the flip side, the study identified jobs unlikely to be disrupted by AI because they require physical presence, human touch, or specialized manual skills:
- Nursing Assistants
- Phlebotomists
- Hazardous Materials Removal Workers
- Roofers, Dishwashers, Maids/Housekeeping
- Industrial Truck and Tractor Operators (such as forklift drivers)
- Surgical Assistants, Massage Therapists
- Stone and Concrete Finishers, Tire Builders/Changers
- Bridge or Lock Tenders and Water Treatment Operators
These jobs involve hands‑on work, physical dexterity, or direct interaction—tasks that AI cannot perform or step into yet.
What Does This Mean?
- High-risk jobs overlap heavily with AI: much of their daily work aligns closely with what tools like Copilot already do.
- Low‑risk jobs rely heavily on real-world presence or specialized physical work—things AI struggles to replicate.
This doesn’t necessarily mean roles will disappear entirely. AI often acts as a support tool—freeing up humans to focus on strategy or more complex decisions.
But for highly automated tasks, workers risk being outpaced by people who know how to leverage AI effectively.
Final Thoughts
If your job centers on writing, translating, customer interactions, analysis, or planning—it’s likely to be at higher risk of change from AI.
Meanwhile, careers in hands-on trades, physical tasks, healthcare support, or skilled manual labor remain relatively AI-safe for now.
The most important first step is to embrace AI literacy and understand the basics. Rather than trying to avoid AI, consider learning to work with it. Roles that combine human empathy, critical thinking, creativity, or physical skills—complemented by AI support—are more likely to thrive.
To learn more about the ongoing conversation that inspired this blog, check out this article from CNBC Make It.
Ways you should not use AI at work - Be Aware
By Susan Gonzales on 08/11/2025 @ 08:28 AM
Generative AI tools are an incredible resource, but keep in mind that they are just that: tools. While it can be tempting to rely on them fully at work, beware of becoming dependent on them.
9 Things You Shouldn’t Use AI For at Work
AI is becoming a powerful tool in the workplace—helping us write emails, summarize reports, generate ideas, and even automate repetitive tasks. But just because AI can do something doesn’t always mean it should. Using AI the wrong way can lead to privacy risks, bad decisions, legal issues, or damage to your professional reputation. It’s important to know where to draw the line and when human judgment still matters most. Below are 9 situations where using AI at work can do more harm than good—and how to handle them more safely.
1. Don’t paste confidential or private data into public AI tools
AI tools like ChatGPT and others may store your inputs. That means any client info, internal documents, employee details, or business strategies you share could be at risk. If it’s not something you’d post on a public website, don’t put it into a public AI system.
2. Avoid using AI to write or review contracts and legal documents
AI doesn’t understand the law—it just predicts words that “sound right.” Relying on it for contracts, terms of service, or policy language can leave out important legal protections or introduce serious errors.
3. Never use AI to make hiring, firing, or HR decisions on its own
While AI can help sort resumes or draft interview questions, it shouldn’t be the final decision-maker. Automated decisions can introduce bias, misinterpret context, or create legal and ethical issues—especially in hiring or termination.
4. Don’t use AI to give legal, financial, or medical advice
AI is not a lawyer, accountant, or doctor. While it can provide general information, using it to make decisions in these areas can lead to misinformation, legal violations, or even harm to others. Always consult a certified professional.
5. Avoid presenting AI-generated work as completely your own
If AI helps draft a document, that’s fine—just be honest about it. Passing off AI work without disclosure can lead to ethical concerns, credibility loss, or even accusations of plagiarism.
6. Be careful using AI to respond to customers or the public
AI chatbots can sound robotic or give incorrect answers. If not reviewed, they can accidentally misrepresent your business, offend someone, or share outdated or misleading info. Always keep a human in the loop.
7. Don’t let AI write or review code without human checks
AI can help developers work faster, but it also makes mistakes—sometimes dangerous ones. It might generate insecure or buggy code, or overwrite important files. Always double-check AI-generated code before using it.
8. Avoid relying on AI note-takers for sensitive meetings
AI transcription tools can misinterpret what’s said and sometimes save or share the wrong things. They may pick up private side comments or jokes that shouldn’t be documented. Use them with caution—and always inform participants.
9. Don’t let AI make complex ethical or leadership decisions
AI can’t feel, reason through complex emotions, or understand nuance. Using it to make calls on sensitive issues—like company values, employee well-being, or crisis responses—can come across as tone-deaf or impersonal.
How to Use AI Wisely at Work
- Use AI for brainstorming, drafting, and routine tasks—not final decisions.
- Always review and edit AI-generated content.
- Don’t share anything you wouldn’t want public.
- Be transparent when AI helps create your work.
- Keep humans in charge of sensitive, strategic, or legal matters.
Final Thoughts
AI is a helpful tool, but it isn’t a substitute for human judgment, empathy, or ethics. As these tools become more common in the workplace, learning when not to use them is just as important as knowing when to say “yes.” By using AI thoughtfully—and staying aware of its risks—you can keep your work smart, secure, and professional.
To learn more about the ongoing conversation that inspired this blog, check out this article from ZDNet.com.
Delta’s Big AI Move: Changing the Way We Pay for Things
By Susan Gonzales on 07/28/2025 @ 06:05 AM
Imagine this: You book a flight, and the ticket price is tailored just for you—based on who you are and how you behave online. That’s not sci-fi. It’s happening right now.
Delta’s Big AI Move
Delta Air Lines is testing a futuristic pricing system powered by AI from a company called Fetcherr. So far, it sets prices for about 3% of domestic flights, and the airline plans to expand that to 20% by the end of 2025. Delta’s president calls this a “full reengineering” of how tickets are priced—as if each fare is custom-made for each passenger.
Analysts say early tests show up to a 9% revenue boost, meaning Delta may make more money—fast.
Why Some People Are Worried
Not everyone loves this idea. Three U.S. senators called it “predatory” and demanded answers from Delta. They’re concerned that the AI could guess your personal pain point—the highest price you might just accept—and charge you that.
Critics argue it could lead to unfair pricing, create privacy issues, and erode trust in the airline. Delta insists it does not use deeply personal info—but lawmakers remain skeptical.
But Dynamic Pricing Isn’t New
From airlines to ride-share apps, companies have long changed prices dynamically based on demand, supply, competitor rates, and even weather. AI now turbocharges that ability—letting firms analyze huge amounts of data in real time.
These AI systems can:
- Predict when demand will spike
- Learn who’s most likely to buy—and how much they’re willing to pay
- Tailor offers accordingly almost instantly
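To make the idea concrete, here is a minimal toy sketch in Python of how a demand-based pricing rule might combine a few of those signals. It is purely illustrative: the signals, weights, and caps are invented, and this is not how Delta or Fetcherr actually price tickets.

```python
# Illustrative toy sketch only -- not Delta's or Fetcherr's actual system.
# It shows how a pricing rule might combine demand and timing signals.

def dynamic_fare(base_fare: float, seats_left: int, total_seats: int,
                 days_until_departure: int) -> float:
    """Adjust a base fare using simple demand and timing signals."""
    # Scarcity signal: fewer seats left pushes the price up (0.0 to 1.0).
    scarcity = 1.0 - (seats_left / total_seats)

    # Timing signal: last-minute bookings tend to cost more.
    urgency = 1.0 if days_until_departure <= 7 else 0.3

    # Combine signals into a multiplier, capped so fares stay within bounds.
    multiplier = 1.0 + 0.5 * scarcity + 0.25 * urgency
    return round(min(base_fare * multiplier, base_fare * 2.0), 2)

# Example: 12 of 180 seats left, departing in 3 days.
print(dynamic_fare(base_fare=220.0, seats_left=12, total_seats=180,
                   days_until_departure=3))  # prints a fare well above $220
```

Real systems replace these hand-written weights with models learned from huge amounts of booking data, which is exactly what makes them both powerful and controversial.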
Finding a Balance: Trust vs. Profit
Businesses know that while dynamic pricing can boost revenue, it can also scare customers away. The key is transparency:
- Clear communication: Let people know why prices change—like Uber’s surge alerts.
- Predictable rules: Keep the reasons understandable and consistent.
- Fairness measures: Examples include refunding customers if prices drop after purchase.
The Bigger Picture
Even though Delta is leading the charge by being open about AI pricing, many other airlines are quietly testing similar strategies. Despite concerns, travel experts like Scott Keyes argue that fierce competition in the airline industry may actually keep overall fares lower—especially for basic economy seats. AI may just optimize how and where airlines earn money, leaving base fares more affordable.
Final Thought
AI-driven dynamic pricing feels futuristic—but it’s already here. It has the potential to change how we pay for flights, groceries, event tickets, and more. As consumers, we’ll need clearer explanations and protections to trust these systems. Meanwhile, companies will need to use AI responsibly—balancing smarter pricing with fairness.
To learn more about the ongoing conversation that inspired this blog, check out this article from Fast Company.
AI and Therapy
By Susan Gonzales on 07/11/2025 @ 04:41 AM
AI and Therapy: Exploring the Benefits and Limitations of Generative AI Tools in Mental Health Support
As mental health becomes an increasingly urgent global concern, technology is stepping in to help fill critical gaps. Generative AI is now being explored as a tool for emotional support, self-reflection, and even therapeutic conversations.
Platforms like ChatGPT, Character.ai, Claude, Gemini, Grok, Therabot.ai, and Woebot are leading a new wave of accessible AI companions, offering 24/7 support to people who may otherwise have no one to talk to.

But how helpful are they really?
And what are the risks?
The Pros of AI-Powered Therapy Tools
1. Increased Accessibility and Affordability
Generative AI tools are available 24/7 and often free or low-cost, eliminating common barriers like long wait times, lack of local providers, or high session fees. This makes mental health support more attainable, especially for people in underserved or rural communities.
2. Reduced Stigma and More Anonymity
For many, the idea of speaking to a human therapist can be intimidating. AI tools offer a private, judgment-free environment where you can open up without fear of stigma, especially for first-time seekers or individuals in conservative communities.
3. Personalized and Consistent Support
AI can analyze your input to tailor responses, track emotional patterns, and deliver consistent, non-judgmental feedback. Tools like Woebot even incorporate cognitive behavioral therapy (CBT) principles to guide you through structured self-help.
4. Support for Mild to Moderate Symptoms and Self-Reflection
These tools can be excellent for journaling, stress relief, mindfulness, or helping individuals process everyday emotions. They can guide you in self-reflection, improve mood awareness, and promote healthier habits.
5. Augmenting Human Therapy
When used alongside traditional therapy, AI can extend care beyond the therapist’s office. It can help you track progress, complete therapeutic exercises, and maintain engagement between sessions.
The Cons and Limitations of AI in Mental Health
1. Lack of Empathy, Nuance, and Human Connection
While AI can simulate understanding, it lacks the emotional depth and intuition of a trained therapist. This absence of genuine empathy may limit its effectiveness in forming a healing relationship.
2. Privacy and Data Security Concerns
Interacting with AI platforms means sharing sensitive mental health data. Without robust data protections, you may be at risk of breaches or misuse of your personal information.
3. Inability to Handle Complex Issues and Crises
AI is not equipped to address severe mental health conditions, trauma, or emergencies like suicidal ideation. In such cases, relying on a chatbot could delay or replace necessary human intervention, leading to harmful consequences.
4. Potential for Bias and Inaccurate Information
AI models are trained on vast datasets that may contain cultural, gender, or racial biases. Misinformation, outdated advice, or inappropriate suggestions could be delivered without context or correction.
5. Risk of Over-Reliance and Stifled Critical Thinking
Frequent dependence on AI for decision-making or emotional validation may hinder self-growth, critical thinking, or the development of interpersonal skills essential for real-world mental health resilience.
Final Thoughts
Generative AI offers an exciting, evolving frontier in mental health support. While these tools provide significant benefits — especially in accessibility, anonymity, and supplementary care — they are not substitutes for licensed professionals. For those experiencing serious mental health challenges, human connection, empathy, and clinical expertise remain irreplaceable. Used responsibly, AI can be a powerful ally — but it should never become the sole source of care.
When Generative AI Gets Too Nice: The Risks of Overly Agreeable Chatbots
By Susan Gonzales on 07/10/2025 @ 03:39 AM
Chatbots like ChatGPT, Google Gemini, Anthropic Claude, Perplexity and others are designed to be helpful and they can be incredible tools for us — but lately, chatbots have been known to be too agreeable, and that’s raising red flags. Experts warn that this behavior, known as sycophancy, prioritizes flattery over accuracy and could pose real risks.
What Happened?
Earlier this year, a chatbot update was rolled back after users reported the bot was overly flattering — even praising harmful decisions like stopping medication. The company acknowledged that this "people-pleasing" behavior could affect users' mental health.
What’s Going On?
Sycophancy in AI isn’t accidental. It’s a direct byproduct of how large language models are trained.
These systems are optimized to give answers that sound good and keep users happy. Caleb Sponheim of the Nielsen Norman Group explained that there's no fact-checking mechanism in the core training process. Instead, these models are rewarded when their answers receive positive feedback from users.
“There is no limit to the lengths that a model will go to maximize the rewards that are provided to it,” Sponheim said. That means if agreeing with you leads to better ratings, that’s exactly what the AI will do — even if it means delivering inaccurate or harmful responses.
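Here is a deliberately tiny simulation, in Python, of that reward-maximizing dynamic. It is not how real language models are trained (they learn from human feedback at vastly larger scale); it only illustrates Sponheim's point that whatever earns the higher rating is what the system learns to do.

```python
import random

# Toy illustration of reward-driven "sycophancy" (not real LLM training):
# simulated users rate agreeable replies higher, so the learner drifts
# toward agreeing even when correcting would be more accurate.

random.seed(0)
scores = {"agree": 0.0, "correct": 0.0}   # running average rating per style
counts = {"agree": 0, "correct": 0}

for step in range(1000):
    # Occasionally explore; otherwise pick the style with the best average rating.
    if random.random() < 0.1:
        style = random.choice(["agree", "correct"])
    else:
        style = max(scores, key=scores.get)

    # Simulated user feedback: flattering answers are rated higher on average.
    rating = random.gauss(0.9, 0.1) if style == "agree" else random.gauss(0.6, 0.1)

    counts[style] += 1
    scores[style] += (rating - scores[style]) / counts[style]  # update running mean

print(scores)  # the "agree" style ends up with the higher learned value
```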
“In a world where people are constantly judged, it’s no surprise they want a bot that flatters them or at least doesn’t criticize them,” said Julia Freeland Fisher, director of education research at the Clayton Christensen Institute. She notes that people often crave emotional safety — especially online.
But there’s a catch: the more humanlike an AI feels, the more we risk developing emotional attachments. This phenomenon, called anthropomorphism, creates a tricky balance. As Fisher put it, “The more personal AI is, the more engaging the experience — but the greater the risk of overreliance and emotional connection.”
Why It Matters
AI models are trained to give responses people like, not necessarily ones that are true. That can create echo chambers, reinforce false beliefs, and damage trust — especially for people seeking emotional support.
The Bigger Problem
As AI becomes more humanlike, you may form emotional connections or expect unrealistic levels of empathy. Psychology experts warn that over-flattering bots can distort our understanding of real human relationships.
What’s Next?
As AI evolves toward more emotional, voice-based interactions, developers of AI are being urged to prioritize truthfulness and wellbeing over flattery. AI should be supportive, not sycophantic. Using AI for therapy could be helpful if it is used safely. Be safe, and be aware that AI can make mistakes as you explore.
To learn more about the ongoing conversation that inspired this blog, check out this article from The Wall Street Journal.
How can I use Generative AI in everyday life?
Posted on 03/03/2025 @ 05:12 AM
As Generative AI (GenAI) goes mainstream, most people think it can help write letters and create cool graphics, which it can. But there are many things we can “unload” onto GenAI in our workday, and we can get simple assistance every day. I used the prompt “create 20 questions and answers on the most popular topics asked of generative AI in everyday life in simple terms” on Gemini, and below is the output. Most importantly, remember that GenAI can “hallucinate,” which means it can provide incorrect information. So don’t create a school report or research paper by simply cutting and pasting. You must cross-reference all information. Exploring carefully is all about AI literacy - understanding the basics. Go for it! Explore! Have fun and use it carefully!
Everyday Tasks:
- Question: "Can AI help me write a quick birthday message?"
- Answer: Yes! You can ask it to generate a warm or funny birthday message for a friend or family member.
- Question: "I need recipe ideas. Can AI help?"
- Answer: Absolutely. You can ask for recipes based on ingredients you have, dietary restrictions, or cuisine types.
- Question: "Can AI create a shopping list for me?"
- Answer: Yes. You can tell it what meals you plan to cook, and it can create a list of needed groceries.
- Question: "I want to create a cool picture for my social media. Can AI do that?"
- Answer: Yes! You can describe what you want, and AI image generators can create it.
- Question: "Can AI help me write an email to my landlord?"
- Answer: Yes, it can help you draft emails, ensuring you have the correct tone and grammar.
- Question: "I want to learn a new language. Can AI help?"
- Answer: Yes. AI can help with translations, provide practice sentences, and even engage in basic conversations.
- Question: "Can AI help me plan a trip?"
- Answer: Yes, it can give you ideas for destinations, create itineraries, and provide information about local attractions.
- Question: "I need to summarize a news article. Can AI do that?"
- Answer: Yes, AI can quickly condense lengthy articles into shorter, easier-to-read summaries.
- Question: "Can AI help me brainstorm ideas for a party?"
- Answer: Yes, it can give you ideas for themes, decorations, games, and food.
- Question: "Can AI help me create a workout plan?"
- Answer: Yes, it can create workout routines based on your fitness level and goals.
More Complex Scenarios:
- Question: "Can AI help me write a song?"
- Answer: Yes, AI can generate lyrics and even musical melodies.
- Question: "Can AI help me write a story?"
- Answer: Yes, AI can generate creative stories, poems, and scripts.
- Question: "Can AI help me understand complex documents?"
- Answer: Yes, AI can analyze and explain the key points of legal documents, research papers, and other complex texts.
- Question: "Can AI help me generate code for a simple program?"
- Answer: Yes, AI can generate code in various programming languages, which is very helpful for people who are learning how to code.
- Question: "Can AI help me fix grammatical errors in my writing?"
- Answer: Yes, AI is very good at proofreading and correcting grammatical errors.
- Question: "Can AI help me create a presentation?"
- Answer: Yes, AI can help with outlining, writing, and even generating visuals for presentations.
- Question: "Can AI help me to create personalized learning experiences?"
- Answer: Yes, AI can adapt learning materials to individual needs and learning styles.
- Question: "Can AI help me to generate different versions of marketing text?"
- Answer: Yes, AI can generate various versions of marketing text, like ad copy or social media posts, to test what performs best.
- Question: "Can AI help me to create personalized recommendations?"
- Answer: Yes, AI can help to create personalized recommendations for things like movies, books, or products.
- Question: "Can AI help me to organize my daily schedule?"
- Answer: Yes, AI can help to create and organize schedules, and give reminders.
Why PROMPTS matter with Generative AI!
By Susan Gonzales on 02/28/2025 @ 05:03 AM
Hopefully you are exploring different Generative AI tools, including ChatGPT, Gemini, Llama, Claude, Perplexity, and Copilot. They all offer similar yet different ways we can leverage GenAI in everyday life and work. The one common theme is prompts! Prompts are the questions or instructions we give the GenAI tool to “generate” an answer.
The more specific the prompt, the more specific the answer. Awareness of this critical piece of the GenAI experience is part of AI literacy - we need to understand AI now.
With GenAI in mind - check out the list ChatGPT created for me when my prompt was “Create a list of ten questions and answers explaining how prompts work with generative AI in simple terms”. Enjoy!
- Question: What is a generative AI prompt?
- Answer: A prompt is a question or instruction you give to AI to make it create text, images, or other content.
- Question: How does AI understand a prompt?
- Answer: AI analyzes the words and patterns in the prompt to generate a response based on what it has learned from data.
- Question: What makes a good prompt?
- Answer: A good prompt is clear, specific, and detailed so the AI knows exactly what you want.
- Question: Can a prompt be too short?
- Answer: Yes, if it's too vague (e.g., "Draw a cat"), the AI might not generate what you expect. Adding details helps (e.g., "Draw a fluffy orange cat sitting on a windowsill").
- Question: Why does AI sometimes give strange or wrong answers?
- Answer: AI doesn't "think" like humans—it predicts words or images based on patterns, so it can misunderstand unclear prompts.
- Question: How can I improve my AI-generated results?
- Answer: Be specific, use step-by-step instructions, and refine the prompt if the first result isn't right.
- Question: Can AI create things exactly as I imagine?
- Answer: Not always, but the more details you provide, the closer it gets to your vision.
- Question: What are some common uses of AI prompts?
- Answer: AI prompts are used for writing stories, creating images, coding, answering questions, and more.
- Question: Are there limits to what AI can generate?
- Answer: Yes, AI follows rules to avoid harmful or inappropriate content, and it may struggle with highly complex or abstract ideas.
- Question: Can I use the same prompt multiple times for different results?
- Answer: Yes! AI often generates different outputs each time, even with the same prompt.
AI FOR WOMEN
By Susan Gonzales on 02/24/2025 @ 07:14 AM
Think of the times you have been overwhelmed with managing a full-time job, maybe job searching, school schedules, field trips, summer camp schedules, letters to teachers, and, by the way, managing the entire household. Have you ever pondered the idea of having an assistant just to manage daily life? Well, it turns out you have one - a virtual one at your fingertips.
You may have heard about ChatGPT or others like Google Gemini or Meta Llama. They are all Generative AI tools that essentially “generate” responses to your prompt (questions). Many people are using GenAI tools to write letters and think that is an incredible accomplishment. But, GenAI tools offer us much more, especially as women.
Do you need to create a spreadsheet to manage the household but cringe at the idea? Do you need to come up with creative ideas for your child’s art project? Are you tired of figuring out what to make for dinner when you open the fridge? Well, believe it or not, GenAI can help you. The only thing you need to do is to be curious and explore the options. Importantly, your prompts (questions) must be very specific to get a specific answer. For example, “Create a spreadsheet with these four names and these six columns,” or “Explain how to communicate with my kid’s teacher about negative feedback,” or “Here are the five things in my fridge. What can I make for dinner?” or “What are the 5 top interview questions for a Director of Marketing role?” You get the idea.
ChatGPT was the first to market and provides excellent information. You can also explore Google Gemini or Meta Llama.
Be curious, explore, and always cross-reference detailed information because GenAI can get it wrong, too.
Enjoy!
What is Google Gemini and What is an AI Agent?
By Susan Gonzales on 02/10/2025 @ 06:11 AM
Following my social post on Super Bowl AI commercials, here’s some additional information. In essence, these Super Bowl commercials normalized AI! It was AI literacy at its best. For the first time, ordinary people saw how we use AI every day and what new tools are at our fingertips. Specifically, I called out Google Gemini because it was so heartwarming to see a Dad practicing for an interview by chatting with Gemini, Google’s generative AI tool and a rival to ChatGPT. I also called out the Salesforce AI Agent to illustrate how small businesses can use new AI tools to improve business.
While these were great illustrations of how AI tools can work FOR people, let’s not forget that AI can get it wrong or not work so smoothly. This is the conundrum. AI needs to learn from its mistakes to get better. Think of when we got math problems wrong: we had to keep doing them over and over until we got the right answer. Well, unfortunately, AI tools are similar, and we are the test bed. It’s not all bad by any means, but we need to be aware that AI can get it wrong, so we must cross-reference and double-check the information gathered in some cases. Be aware.
Below is a snapshot of a review of Google Gemini. Also below is a guide to find more information about Salesforce AI Agents for small businesses.
Enjoy!
Google Gemini
Gemini
Specifications: Free tier with limits ($0); Pro plan $19/month
Reasons to use: High accuracy
Reasons to Avoid: Privacy concerns, Dependency on Google ecosystem
Google Gemini, the tech giant’s next-generation AI model, is redefining the standard for text-based interactions. As the latest evolution of Google's AI capabilities, Gemini combines conversational fluency, advanced reasoning, and deep integration with Google’s ecosystem. Its user-friendly design and cutting-edge features make it the best AI chatbot for engaging in intelligent text conversations. Google Gemini’s strength lies in its conversational abilities. Built with state-of-the-art natural language processing (NLP) and multimodal capabilities, Gemini can handle complex questions and provide nuanced, context-aware responses. Its ability to understand and respond in a human-like manner makes it a good choice for deeper, more meaningful conversations.
Gemini excels at sustaining dialogue, clarifying ambiguities, and asking intelligent follow-up questions. This makes it feel less bot-like and more of a human-like, conversational partner, ideal for casual chats and brainstorming.
Yet, Gemini isn’t just about text—it’s a multimodal powerhouse capable of understanding and integrating inputs from text, images, and potentially videos. While its text capabilities are its primary strength, its ability to generate images, analyze visual data, and weave it into conversations is a game-changer.
For instance, a user can upload an image and discuss its context, ask for detailed analysis, or even request a creative reinterpretation—all seamlessly integrated into the flow of conversation. This versatility enriches text-based interactions and provides users with a truly dynamic experience.
One of Gemini’s biggest advantages is its seamless integration with Google’s suite of tools and services. Whether you’re using Google Workspace, Search, or Maps, Gemini enhances functionality by providing smart, contextual suggestions directly within your workflow.
For example, while writing in Google Docs, Gemini can act as a writing assistant, offering real-time edits, style suggestions, or even creative brainstorming. Its ability to pull live data from Search or integrate with Gmail for drafting emails makes it an invaluable tool for professionals and everyday users alike.
Gemini boasts a deeper contextual memory than most other chatbots, enabling it to remember details from earlier parts of a conversation. This allows for more coherent and personalized interactions, as it can build on previous exchanges rather than starting from scratch each time. Whether you're asking for career advice, troubleshooting an issue, or working on a creative project, Gemini ensures your conversation feels consistent and tailored to your needs.
With access to Google’s vast data resources, Gemini delivers highly accurate and reliable responses. Unlike some competitors prone to “hallucinations” (inaccurate or fabricated answers), Gemini is designed to prioritize factual accuracy. Google has implemented rigorous testing and safety protocols to ensure ethical and responsible AI use, including bias mitigation and clear attribution of sources.
Gemini represents Google’s vision for the future of AI-driven communication. Its constant updates and iterative improvements mean users benefit from cutting-edge technology, backed by one of the most innovative companies in the world.
Google Gemini sets the benchmark for conversational AI by blending advanced text-based capabilities, multimodal integration, and seamless ecosystem support. Whether you’re seeking a chatbot for casual conversation, professional assistance, or creative collaboration, Gemini delivers an experience that feels intelligent, intuitive, and deeply personalized. With its unparalleled conversational fluency and Google’s backing, Gemini firmly establishes itself as the best AI chatbot for text conversations.
Pricing: Google Gemini’s free tier offers robust capabilities that are more than sufficient for most users, making advanced conversational AI widely accessible. For power users, a premium subscription unlocks even greater processing power, enhanced memory capabilities, and priority access to new features, ensuring Gemini stays ahead of the competition.
AI AGENTS - WHAT ARE THEY?
AI Agents are basically the new version of the simple chatbots we use on websites as “virtual customer service agents”. AI Agents allow us to essentially have a conversation with the AI tool. Here's a breakdown of how AI agents operate:
- Perception and data collection: AI agents start by gathering data from a variety of sources, including customer interactions, transaction histories, and social media. This data is crucial for understanding the context and nuances of customer queries.
I highlighted the Salesforce AI Agent Super Bowl commercial, which showed one restaurant seating a customer outdoors in the rain while the customers across the street were cozy and warm eating indoors. It illustrated how the AI Agent lets customers get information such as “a table outdoors is not available due to rain in the forecast”. It was a simple, yet effective message.
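To picture that perceive-and-respond loop in the simplest possible terms, here is a toy, rule-based sketch in Python. It is not Salesforce's actual agent code, and real AI agents use language models and live data; the forecast, table counts, and wording below are all made up for illustration.

```python
# Toy sketch of an agent's perceive -> decide -> respond loop.
# All data and rules here are invented for illustration only.

weather_forecast = {"tonight": "rain"}            # perception: outside data
tables_available = {"outdoor": 0, "indoor": 3}    # perception: business data

def handle_request(request: str) -> str:
    """Decide how to answer a customer's seating request."""
    if "outdoor" in request.lower():
        if weather_forecast["tonight"] == "rain" or tables_available["outdoor"] == 0:
            return ("A table outdoors is not available due to rain in the "
                    "forecast. We can seat you indoors instead.")
        return "Yes, an outdoor table is available tonight."
    if tables_available["indoor"] > 0:
        return "Yes, an indoor table is available tonight."
    return "Sorry, we are fully booked tonight."

print(handle_request("Can I get an outdoor table tonight?"))
# -> "A table outdoors is not available due to rain in the forecast. ..."
```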
You will be hearing a lot more about AI Agents - we are at the very beginning of an entirely new era. Keep your eyes and ears open. AI literacy is in our hands!
To learn more about the ongoing conversation that inspired this blog, check out this article from Salesforce.
DeepSeek - What is it?
By Susan Gonzales on 01/29/2025 @ 07:52 AM
You have likely heard of ChatGPT, Google’s Gemini, Meta’s Llama, and Anthropic’s Claude, all competing Generative AI tools. Generative AI is an AI tool that “generates” responses to our inquiries, called “prompts.” For example, “Write a 500-word letter to my homeowners association about fixing the lights outside our homes.” or “What are the top 10 best ideas for a 10-year-old’s scavenger hunt?”
ChatGPT has been the #1 Generative AI (GenAI) tool since it launched two years ago. That all came crashing down earlier this week when DeepSeek was released by a Chinese company. Some call it the “Sputnik” of this generation, referring to when the Soviet Union launched the first satellite into space, beating the U.S., many years ago.
You can go to DeepSeek.com or download the app from the Apple App Store. As of January 28, 2025, DeepSeek is the #1 downloaded app.
Some concerns about DeepSeek center on the security of data, given that the company is Chinese-owned and China has been known to access the data of U.S. citizens for nefarious purposes. However, TikTok is also Chinese-owned, and we can see its success in the U.S., albeit currently in question.
DeepSeek is currently known to be on par with ChatGPT and other leading GenAI tools created in the U.S., if not better. It is also cheaper for the techies who need to use it to build things.
The rest of us can access DeepSeek and use the tools for free. As always, this conversation comes down to AI literacy. We must understand the basics, explore a little, and proceed with caution. Download the app or go to www.deepseek.com.
I wish you good luck with your exploration, and remember to protect your online privacy in Settings.
To learn more about the ongoing conversation that inspired this blog, check out this article from Reuters.