Category: AI

Original Work or AI?

Let me show you my rough draft vs. an AI revision.

    Here’s a polished version of your essay with enhanced clarity, improved flow, and corrections for grammar and tone. Let me know if there are any specific stylistic preferences you’d like to emphasize.

AI Version

    There’s a lot of conversation right now about AI, plagiarism, and original work. People are asking important questions: What’s acceptable in today’s world? What role should AI play in content creation?

    As AI becomes more embedded in our lives, I think we’ll grow increasingly comfortable with the idea of computers proofreading, refining, and even rewriting much of what we produce. This progression feels natural. It started with simple spell checkers built into word processors, then grammar tools, and later programs like Grammarly, which helped rework entire sentences for clarity. Now, we’ve advanced to tools like ChatGPT, Copilot, and Gemini. With these, you can simply upload your document and ask the AI to “scrub” it—and it does. It can even restructure your ideas using your own words and concepts, delivering a cleaner, often better, version of your work.

    This represents a shift from fixing obvious errors to refining content without altering meaning. But it raises an important question: Is this plagiarism? Is the AI pulling ideas or phrases from elsewhere and passing them off as yours?

    If you ask an AI to generate something entirely on its own—say, a detailed financial advice column about mutual funds—that might tread into plagiarism, especially if you contributed little to the result. But if you write your own content and the AI simply refines it for clarity, I don’t believe that qualifies as plagiarism. To me, it’s just the next step in the evolution of word processing.

    Of course, there’s room for debate about tone, ownership, and how AI draws from existing data to create its outputs. But we live in a world of shared ideas. The real challenge arises when we can no longer distinguish between human-created and computer-generated content.

    Take voice replication as an example. Imagine uploading samples of your voice into a program, which can then create podcasts that sound like you—without you ever speaking a word. All you’d need to do is give it a prompt like, “Make a five-minute podcast on Ukraine, and emphasize support for their cause.” In moments, you’d have a polished podcast that sounds authentically like you.

    That’s where things get unsettling, especially when this technology could be misused. Someone could upload your voice, generate content, and make it seem like you said things you didn’t. If this sounds like a leap, let me remind you: technology advances quickly. Just two years ago, I would’ve thought this kind of capability was science fiction. Now, it’s reality, and it’s accelerating rapidly.

    As we approach advancements in chip technology and quantum computing, distinguishing human-generated content from AI-created material will only get harder.

    So, what can we do about it? That’s the million-dollar question.

    In the U.S., we could pass laws and regulations to manage AI use, but that wouldn’t stop other countries from pushing the boundaries of what AI can do. Cyber warfare is already a global concern, and AI is at its forefront.

    What’s critical is ensuring that AI is programmed responsibly. Developers must build guardrails into these systems to prevent them from causing harm. Isaac Asimov envisioned a future where computers followed rigid, binary rules, but our current AI is evolving into something much more nuanced.

    Ultimately, we have to adapt to the future we’re creating. We may not have all the answers yet, but we can strive to ensure this technology is used ethically, responsibly, and in a way that benefits humanity.

    Let me know if you’d like further tweaks or adjustments!

    My Version

    Please review and edit my essay that I’m planning on posting on my blog. Original work or is it AI So right now there is a lot of conversations going on with people about AI and plagiarism and original work. Lots of questions of what is acceptable in today’s world. I think as we get more and more embedded AI we’re gonna just find ourselves very comfortable with the idea that computers are going to proofread and rewrite a lot of the things that we produce. It’s just a natural progression of things. First that started off with spelling checking Capabilities built into word processors, and then they had grammar checkers and then we had programs like Grammarly that just went through and helped you rewrite sentences and now you don’t even have to go line by line through it. You can just throw it up into your AI like ChatGPT Copilot or Gemini and say hey scrub this document for me and it will. It’ll even rewrite it in a way that uses your words your ideas and everything that you were working on and create a whole different version of your document and you probably are gonna like it better. So that’s the nuance world is moving from where it fixes things that are obviously wrong to changing things without changing meaning. The question really is is it plagiarism? Is it pulling the information from somewhere else and uses somebody else’s words to rewrite your documentation your essay your vision. Now, if the propped is, hey, can you write a complete plan for me? That is a financial advice column and today we’re gonna talk about mutual funds and that is the only prompt that you give AI and it just goes And creates a detailed plan that you didn’t even have anything to do with. Yeah that might be plagiarism, but if it takes every sentence that you write and just restructures it to be clear, I don’t think that’s plagiarism. I think that’s just the next evolution of word processing. 
Now we can debate this back-and-forth about tone and where it gets the words from, but ultimately we live in a world of ideas. The bad part about AI if there is a bad part is where we are unable to distinguish between the information that is human mind created versus computer created content in totem. That means when you upload your voice, so then it can use your sounds to replicate the way that you talk yeah that’s gonna change the world again we’re gonna have podcasts that are created completely by AI. Someone’s gonna say I need a five minute podcast using my voice and today’s topic is Ukraine and I am totally for supporting Ukraine and all of a sudden you’re gonna get a 5 to 7 minute podcast that’s gonna talk all about that subject and you never said a word. That’s the scary part right And it’s even scarier if somebody can upload your voice to a computer and make it sound like you said those things. I know this seems like a leap Judd you’re crazy but if you read my blog two years ago, I would’ve thought that nothing like this could possibly have happened But it’s happening and it’s happening extremely fast and it has chip technology moves, faster, and faster and we get into quantum computing. I see the challenges of understanding the difference between human and computer, derived content to become more and more difficult to distinguish. So what can we do about it right? That’s the big million dollar question, I don’t know that there is anything we can do about it. So in the United States, we can pass laws and we can do all sorts of things, but that will will not stop other countries from forging ahead to create this capability. We have to worry about cyber warfare and AI is at the forefront of it. There’s a lot of considerations going on in the world today. 
I think that we just have to ask programmers to program AI responsibly, and ensure that there are some guide rails protections built-in to prevent AI from going off the rails as it were and doing things that can harm humans. And while Isaac Amoth had some great vision, I think that the binary thinking of computers in the past is going to be completely replaced with a more nuance approach for the future. And that’s how we’re going to have to live with the future that we’ve created today.