Skills

AI Tools & Platforms

  • ChatGPT (OpenAI)

  • Gemini (Google)

  • Microsoft Copilot

  • Perplexity AI

  • Claude (Anthropic)

  • Le Chat (Mistral AI)

Prompt Example:

Review the documentation and perform these two assessments:
1) Ensure that the section on "Structural Framework" isn't redundant or already encompassed by the other sections. If it is redundant to the point of needing to be cut, assess which other sections "Structural Framework" could be worked into by rewording them so that the concept of Structural Framework is still encompassed.
2) If Assessment 1 finds no duplication, assess whether there is another section that is conceptually weaker than Structural Framework. If there is, we can replace that section with Structural Framework.

Instruction-Tuned Prompt Design & Optimization

Collaboration Over Delegation

I design prompts as collaborative blueprints, not just commands. I treat AI as a partner rather than a passive tool. Every interaction is about guiding with clarity and purpose to create something smarter together, where the process matters just as much as the outcome. The goal isn't to outsource the solutions. It's to orchestrate them.

Natural Language Optimization

Clarity Molds Consistency

Effective communication is essential when working with LLMs. I craft prompts that define the task with precision, providing relevant context, clearly stated goals, and intentional language choices so the model interprets instructions exactly as intended. Clear communication is more than knowing what to say; it's about ensuring that nothing can be misinterpreted.

Prompt Example:

Generate a snippet of CSS for the website I'm building in Visual Studio Code. I'll be adding this to the pages.tsx portion of my code. The code is for a hover animation on my home button. I need the animation to be smooth and consistent, so there should be an "activation" animation for when the mouse first hovers over the button and a "deactivation" animation for when the mouse moves away.

Prompt Example:

Review this attached PDF of technical documentation I've written up. Look for the following:
1) Ensure that the language is concise and easy to understand.

2) Look for redundancies and repetition of instructions that can be trimmed down.

3) Look for spelling errors that need correction.

4) Look over the instructions themselves. Based on what the instructions are trying to accomplish, ensure that they accurately reflect the actual solutions.

Prompt Architecture & Response Control

Structured Inputs Control Outputs

I create prompts with LLMs' pattern-matching behavior in mind to prevent drift across responses and ensure repeatable results across use cases. I design structured scaffolding for the AI to follow, tuning outcomes through modular prompts that provide clarity and prevent hallucinations. That added structure lets the model navigate the instructions more accurately.
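
A minimal sketch of what that scaffolding can look like in practice; the build_prompt helper, the block labels, and the documentation-review task are illustrative placeholders, not an established implementation:

    # Illustrative modular prompt scaffold: every request is assembled from the
    # same labeled, delimited blocks so responses stay consistent across runs.
    def build_prompt(role, task, constraints, source_text):
        constraint_lines = "\n".join(f"- {c}" for c in constraints)
        return f"""ROLE:
    {role}

    TASK:
    {task}

    CONSTRAINTS:
    {constraint_lines}

    SOURCE (delimited by triple backticks):
    ```{source_text}```
    """

    prompt = build_prompt(
        role="You are a technical documentation reviewer.",
        task="Flag any section that duplicates another section.",
        constraints=[
            "Quote the overlapping passages.",
            "Report findings only; do not rewrite anything yet.",
        ],
        source_text="...documentation goes here...",
    )
    print(prompt)

Keeping the instructions and the source material in separately labeled, delimited blocks is the design choice doing the work here: the model never has to guess which part of the prompt is a command and which part is content to be processed.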

Model Behavior Debugging

Feedback Drives Refinement

Even with clear and concise communication, AI can still make mistakes. That is why it is important not only to identify when things go wrong but also to know how to respond. I establish a base-level understanding of what an accurate response should look like so I can recognize when a result is off track. When that happens, I read back through previous prompts to find errors on my end, clarify the structure of the intended response, and list where the result went wrong to guide a more accurate follow-up.
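
A minimal sketch of the kind of corrective follow-up this produces; the expected format and the observed issues are hypothetical example values:

    # Illustrative follow-up prompt: restate the intended structure and list
    # exactly where the previous response went wrong before asking for a redo.
    expected_format = "A numbered list of exactly 4 findings, each citing a section name."
    observed_issues = [
        "Finding 2 cited a section that does not exist in the document.",
        "The findings were returned as a paragraph instead of a numbered list.",
    ]

    issue_lines = "\n".join(f"- {issue}" for issue in observed_issues)
    followup_prompt = f"""Your previous answer did not match the intended structure.

    Expected format:
    {expected_format}

    What went wrong:
    {issue_lines}

    Redo the review, keeping the findings that were correct and fixing only the issues listed above.
    """
    print(followup_prompt)

Naming the failure explicitly, rather than simply re-asking, gives the model a concrete target for the retry.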

Prompt Example:

Let's draft an email together. We will focus on wording and tone. The email will be professional, but also personable. I will send you a draft I've already written up. Check the tone to make sure it matches that professional, yet personable tone. If there are any alternate wording suggestions, mention them separately so that I can assess them before including them.

Prompt Example:

Review the documentation and perform these two assessments:
1) Ensure that the section on "Structural Framework" isn't redundant or already encompassed by the other sections. If it is redundant to the point of needing to be cut, assess which other sections "Structural Framework" could be worked into by rewording them so that the concept of Structural Framework is still encompassed.
2) If Assessment 1 finds no duplication, assess whether there is another section that is conceptually weaker than Structural Framework. If there is, we can replace that section with Structural Framework.

Systems Thinking

System-Conscious Architecture

I build prompts with an understanding of how LLMs process information and respond to structure, sequence, and context. Rather than treating each prompt as a standalone instruction, I design structured systems that guide the model through tasks with consistency and control. With each prompt, I observe how formatting, specificity, and prior inputs influence behavior, adjusting the architecture to reduce drift, prevent hallucinations, and maintain clarity across responses. The result is a prompt system that adapts with the model rather than reacting to it.
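
A minimal sketch of that kind of prompt system, where each step sees the original source plus the results of earlier steps; the run_pipeline helper and the stub standing in for the model call are placeholders, not a specific API:

    # Illustrative multi-step pipeline: context is carried forward explicitly so
    # later instructions build on earlier outputs instead of drifting from them.
    def run_pipeline(complete, source_text, steps):
        context = f"SOURCE (delimited by triple backticks):\n```{source_text}```"
        results = []
        for step in steps:
            prompt = f"{context}\n\nINSTRUCTION:\n{step}"
            output = complete(prompt)
            results.append(output)
            context += f"\n\nPREVIOUS STEP RESULT:\n{output}"
        return results

    def stub(prompt):
        # Stand-in for a real model call, used only to show the flow.
        return f"[model output for: {prompt.splitlines()[-1]}]"

    demo_steps = [
        "List the section headings in the source.",
        "For each heading, state in one sentence what that section covers.",
    ]
    print(run_pipeline(stub, "Example documentation text.", demo_steps))

Carrying the intermediate results forward in the context is what lets later steps build on what the model actually produced rather than on what was assumed.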

Certifications & Courses

Blockchain Council: Prompt Engineering Certification

(In Progress)

Google Cloud: Introduction to Generative AI

(Completed)

DeepLearning.AI: ChatGPT Prompt Engineering for Developers

(Completed)

  • In-Course Python Test Prompt:

    import os
    import openai  # the course uses the pre-1.0 openai package and its ChatCompletion API

    # The course notebook reads the API key from the environment.
    openai.api_key = os.getenv("OPENAI_API_KEY")

    def get_completion(prompt, model="gpt-3.5-turbo"):
        # Send a single user message and return the model's reply text.
        messages = [{"role": "user", "content": prompt}]
        response = openai.ChatCompletion.create(
            model=model,
            messages=messages,
            temperature=0,  # deterministic output for repeatable tests
        )
        return response.choices[0].message["content"]

    text = f"""
    You should express what you want a model to do by \
    providing instructions that are as clear and \
    specific as you can possibly make them. \
    This will guide the model towards the desired output, \
    and reduce the chances of receiving irrelevant \
    or incorrect responses. Don't confuse writing a \
    clear prompt with writing a short prompt. \
    In many cases, longer prompts provide more clarity \
    and context for the model, which can lead to \
    more detailed and relevant outputs.
    """

    # The text is interpolated between triple backticks so the model knows
    # exactly which passage the summarization instruction refers to.
    prompt = f"""
    Summarize the text delimited by triple backticks \
    into a single sentence.
    ```{text}```
    """

    response = get_completion(prompt)
    print(response)

© All rights reserved
