
Task Module

openaivec.task

Pre-configured task library for OpenAI API structured outputs.

This module provides a comprehensive collection of pre-configured tasks designed for various business and academic use cases. Tasks are organized into domain-specific submodules, each containing ready-to-use PreparedTask instances that work seamlessly with openaivec's batch processing capabilities.

Available Task Domains

Natural Language Processing (nlp)

Core NLP tasks for text analysis and processing:

  • Translation: Multi-language translation with 40+ language support
  • Sentiment Analysis: Emotion detection and sentiment scoring
  • Named Entity Recognition: Extract people, organizations, locations
  • Morphological Analysis: Part-of-speech tagging and lemmatization
  • Dependency Parsing: Syntactic structure analysis
  • Keyword Extraction: Important term identification
Customer Support (customer_support)

Specialized tasks for customer service operations:

  • Intent Analysis: Understand customer goals and requirements
  • Sentiment Analysis: Customer satisfaction and emotional state
  • Urgency Analysis: Priority assessment and response time recommendations
  • Inquiry Classification: Automatic categorization and routing
  • Inquiry Summary: Comprehensive issue summarization
  • Response Suggestion: AI-powered response drafting

Usage Patterns

Quick Start with Default Tasks
from openai import OpenAI
from openaivec._responses import BatchResponses
from openaivec.task import nlp, customer_support

client = OpenAI()

# Use pre-configured tasks
sentiment_analyzer = BatchResponses.of_task(
    client=client,
    model_name="gpt-4.1-mini",
    task=nlp.SENTIMENT_ANALYSIS
)

intent_analyzer = BatchResponses.of_task(
    client=client,
    model_name="gpt-4.1-mini",
    task=customer_support.INTENT_ANALYSIS
)
Customized Task Configuration
from openaivec.task.customer_support import urgency_analysis

# Create customized urgency analysis
custom_urgency = urgency_analysis(
    business_context="SaaS platform support",
    urgency_levels={
        "critical": "Service outages, security breaches",
        "high": "Login issues, payment failures",
        "medium": "Feature bugs, billing questions",
        "low": "Feature requests, general feedback"
    }
)

analyzer = BatchResponses.of_task(
    client=client,
    model_name="gpt-4.1-mini",
    task=custom_urgency
)
Pandas Integration
import pandas as pd
from openaivec import pandas_ext

df = pd.DataFrame({"text": ["I love this!", "This is terrible."]})

# Apply tasks directly to DataFrame columns
df["sentiment"] = df["text"].ai.task(nlp.SENTIMENT_ANALYSIS)
df["intent"] = df["text"].ai.task(customer_support.INTENT_ANALYSIS)

# Extract structured results
results_df = df.ai.extract("sentiment")
Spark Integration
from openaivec.spark import ResponsesUDFBuilder

# Register UDF for large-scale processing
spark.udf.register(
    "analyze_sentiment",
    ResponsesUDFBuilder.of_openai(
        api_key=api_key,
        model_name="gpt-4.1-mini"
    ).build_from_task(task=nlp.SENTIMENT_ANALYSIS)
)

# Use in Spark SQL
df = spark.sql("""
    SELECT text, analyze_sentiment(text) as sentiment
    FROM customer_feedback
""")

Task Architecture

PreparedTask Structure

All tasks are built using the PreparedTask dataclass:

@dataclass(frozen=True)
class PreparedTask:
    instructions: str                         # Detailed prompt for the LLM
    response_format: type[ResponseFormat]     # Pydantic model or str for structured/plain output
    api_kwargs: dict = field(default_factory=dict)  # Additional OpenAI API parameters (temperature, top_p, etc.)
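For illustration, a custom task can be assembled directly from these pieces. The sketch below is hedged: the TicketTriage model, its fields, and the CUSTOM_TRIAGE name are hypothetical, and the exact import location of PreparedTask may differ in your installed version.

from typing import Literal

from pydantic import BaseModel

from openaivec.task import PreparedTask  # assumption: adjust to where PreparedTask is exported


class TicketTriage(BaseModel):
    """Hypothetical response format for a custom triage task."""

    category: Literal["bug", "question", "feature_request"]  # categorical field -> Literal values
    summary: str  # free-text fields respond in the input language


CUSTOM_TRIAGE = PreparedTask(
    instructions="Classify the ticket as bug, question, or feature_request and summarize it briefly.",
    response_format=TicketTriage,
    api_kwargs={"temperature": 0.0, "top_p": 1.0},
)

A task built this way can be passed to BatchResponses.of_task or the .ai.task accessor exactly like the built-in constants.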
Response Format Standards
  • Literal Types: Categorical fields use typing.Literal for type safety
  • Multilingual: Non-categorical fields respond in input language
  • Validation: Pydantic models ensure data integrity
  • Spark Compatible: All types map correctly to Spark schemas
Design Principles
  1. Consistency: Uniform API across all task domains
  2. Configurability: Customizable parameters for different use cases
  3. Type Safety: Strong typing with Pydantic validation
  4. Scalability: Optimized for batch processing and large datasets
  5. Extensibility: Easy to add new domains and tasks

Adding New Task Domains

To add a new domain (e.g., finance, healthcare, legal):

  1. Create Domain Module: src/openaivec/task/new_domain/
  2. Implement Tasks: Following existing patterns with Pydantic models (see the sketch after the example layout below)
  3. Add Multilingual Support: Include language-aware instructions
  4. Export Functions: Both configurable functions and constants
  5. Update Documentation: Add to this module docstring
Example New Domain Structure
src/openaivec/task/finance/
├── __init__.py              # Export all functions and constants
├── risk_assessment.py       # Credit risk, market risk analysis
├── document_analysis.py     # Financial document processing
└── compliance_check.py      # Regulatory compliance verification
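As a rough sketch of step 2, a new task module would follow the same shape as the customer_support sources shown later on this page: a Pydantic response model, a configurable factory function, and an exported default constant. All names here (RiskAssessment, risk_assessment, RISK_ASSESSMENT) are hypothetical, and the PreparedTask import path is an assumption.

# src/openaivec/task/finance/risk_assessment.py (hypothetical sketch)
from typing import Literal

from pydantic import BaseModel

from openaivec.task import PreparedTask  # assumption: adjust to the actual PreparedTask location


class RiskAssessment(BaseModel):
    risk_level: Literal["low", "medium", "high", "critical"]  # categorical -> English Literal values
    rationale: str  # free text, answered in the input language


def risk_assessment(business_context: str = "general finance", **api_kwargs) -> PreparedTask:
    """Create a configurable risk assessment task (hypothetical example)."""
    instructions = f"""Assess the financial risk described in the input text.

Business Context: {business_context}

IMPORTANT: Respond in the input language, except risk_level, which must use the English values above."""
    return PreparedTask(instructions=instructions, response_format=RiskAssessment, api_kwargs=api_kwargs)


# Exported default instance, mirroring constants such as customer_support.INTENT_ANALYSIS
RISK_ASSESSMENT = risk_assessment()

The domain's __init__.py would then export both risk_assessment and RISK_ASSESSMENT (step 4).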

Performance Considerations

  • Batch Processing: Use BatchResponses for multiple inputs
  • Deduplication: Automatic duplicate removal reduces API costs
  • Caching: Results are cached based on input content
  • Async Support: AsyncBatchResponses for concurrent processing (see the sketch below)
  • Token Optimization: Vectorized system messages for efficiency
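A minimal async sketch, assuming AsyncBatchResponses mirrors the BatchResponses interface shown above (an of_task constructor and an awaitable parse); verify the import path and method names against your installed openaivec version.

import asyncio

from openai import AsyncOpenAI

# Assumption: AsyncBatchResponses lives alongside BatchResponses and exposes the same interface.
from openaivec._responses import AsyncBatchResponses
from openaivec.task import nlp


async def main() -> None:
    client = AsyncOpenAI()
    analyzer = AsyncBatchResponses.of_task(
        client=client,
        model_name="gpt-4.1-mini",
        task=nlp.SENTIMENT_ANALYSIS,
    )
    # parse is assumed to be awaitable and to return one result per input
    results = await analyzer.parse(["I love this!", "This is terrible."])
    for result in results:
        print(result.sentiment)


asyncio.run(main())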

Best Practices

  1. Choose Appropriate Models:
     • gpt-4.1-mini: Fast, cost-effective for most tasks
     • gpt-4o: Higher accuracy for complex analysis
  2. Customize When Needed:
     • Use default tasks for quick prototyping
     • Configure custom tasks for production use
  3. Handle Multilingual Input:
     • Tasks automatically detect and respond in input language
     • Categorical fields remain in English for system compatibility
  4. Monitor Performance:
     • Use batch sizes appropriate for your use case
     • Monitor token usage for cost optimization

See individual task modules for detailed documentation and examples.

Modules

customer_support

Modules
customer_sentiment

Customer sentiment analysis task for support interactions.

This module provides a predefined task for analyzing customer sentiment specifically in support contexts, including satisfaction levels and emotional states that affect customer experience and support strategy.

Example

Basic usage with BatchResponses:

from openai import OpenAI
from openaivec._responses import BatchResponses
from openaivec.task import customer_support

client = OpenAI()
analyzer = BatchResponses.of_task(
    client=client,
    model_name="gpt-4.1-mini",
    task=customer_support.CUSTOMER_SENTIMENT
)

inquiries = [
    "I'm really disappointed with your service. This is the third time I've had this issue.",
    "Thank you so much for your help! You've been incredibly patient.",
    "I need to cancel my subscription. It's not working for me."
]
sentiments = analyzer.parse(inquiries)

for sentiment in sentiments:
    print(f"Sentiment: {sentiment.sentiment}")
    print(f"Satisfaction: {sentiment.satisfaction_level}")
    print(f"Churn Risk: {sentiment.churn_risk}")
    print(f"Emotional State: {sentiment.emotional_state}")

With pandas integration:

import pandas as pd
from openaivec import pandas_ext  # Required for .ai accessor
from openaivec.task import customer_support

df = pd.DataFrame({"inquiry": [
    "I'm really disappointed with your service. This is the third time I've had this issue.",
    "Thank you so much for your help! You've been incredibly patient.",
    "I need to cancel my subscription. It's not working for me."
]})
df["sentiment"] = df["inquiry"].ai.task(customer_support.CUSTOMER_SENTIMENT)

# Extract sentiment components
extracted_df = df.ai.extract("sentiment")
print(extracted_df[[
    "inquiry", "sentiment_satisfaction_level",
    "sentiment_churn_risk", "sentiment_emotional_state"
]])

Attributes:

CUSTOMER_SENTIMENT (PreparedTask): A prepared task instance configured for customer sentiment analysis with temperature=0.0 and top_p=1.0 for deterministic output.

Functions
customer_sentiment
customer_sentiment(
    business_context: str = "general customer support",
    **api_kwargs,
) -> PreparedTask

Create a configurable customer sentiment analysis task.

Parameters:

business_context (str): Business context for sentiment analysis. Default: 'general customer support'.
**api_kwargs: Additional OpenAI API parameters (temperature, top_p, etc.). Default: {}.

Returns:

PreparedTask: PreparedTask configured for customer sentiment analysis.

Source code in src/openaivec/task/customer_support/customer_sentiment.py
def customer_sentiment(business_context: str = "general customer support", **api_kwargs) -> PreparedTask:
    """Create a configurable customer sentiment analysis task.

    Args:
        business_context (str): Business context for sentiment analysis.
        **api_kwargs: Additional OpenAI API parameters (temperature, top_p, etc.).

    Returns:
        PreparedTask configured for customer sentiment analysis.
    """

    instructions = f"""Analyze customer sentiment in the context of support interactions, focusing on
satisfaction, emotional state, and business implications.

Business Context: {business_context}

Sentiment Categories:
- positive: Customer is happy, satisfied, or grateful
- negative: Customer is unhappy, frustrated, or disappointed
- neutral: Customer is matter-of-fact, without strong emotions
- mixed: Customer expresses both positive and negative sentiments

Satisfaction Levels:
- very_satisfied: Extremely happy, praising service, expressing gratitude
- satisfied: Content, appreciative, positive feedback
- neutral: Neither satisfied nor dissatisfied, factual communication
- dissatisfied: Unhappy, expressing concerns, mild complaints
- very_dissatisfied: Extremely unhappy, angry, threatening to leave

Emotional States:
- happy: Cheerful, pleased, content
- frustrated: Annoyed, impatient, struggling with issues
- angry: Hostile, aggressive, demanding immediate action
- disappointed: Let down, expectations not met
- confused: Lost, needing clarification, overwhelmed
- grateful: Thankful, appreciative of help received
- worried: Anxious, concerned about outcomes

Churn Risk Assessment:
- low: Happy customers, positive experience
- medium: Neutral customers, some concerns but manageable
- high: Dissatisfied customers, multiple issues, expressing frustration
- critical: Extremely unhappy, threatening to cancel, demanding escalation

Relationship Status:
- new: First-time contact, tentative, learning
- loyal: Long-term customer, familiar with service
- at_risk: Showing signs of dissatisfaction, needs attention
- detractor: Actively unhappy, may spread negative feedback
- advocate: Extremely satisfied, promotes service to others

Response Approach:
- empathetic: Use compassionate language, acknowledge feelings
- professional: Maintain formal, solution-oriented communication
- solution_focused: Directly address problems, provide clear next steps
- escalation_required: Immediately involve management or specialists

Analyze tone indicators like:
- Positive: "thank you", "great", "helpful", "love", "excellent"
- Negative: "terrible", "disappointed", "frustrated", "awful", "horrible"
- Urgency: "urgent", "immediately", "ASAP", "critical"
- Threat: "cancel", "switch", "competitor", "lawyer", "report"

IMPORTANT: Provide analysis responses in the same language as the input text, except for the
predefined categorical fields (sentiment, satisfaction_level, emotional_state, churn_risk,
relationship_status, response_approach) which must use the exact English values specified above.
For example, if the input is in Spanish, provide tone_indicators in Spanish, but use English
values like "positive" for sentiment.

Provide comprehensive sentiment analysis with business context and recommended response strategy."""

    return PreparedTask(instructions=instructions, response_format=CustomerSentiment, api_kwargs=api_kwargs)
inquiry_classification

Inquiry classification task for customer support.

This module provides a configurable task for classifying customer inquiries into different categories to help route them to the appropriate support team.

Example

Basic usage with default settings:

from openai import OpenAI
from openaivec._responses import BatchResponses
from openaivec.task import customer_support

client = OpenAI()
classifier = BatchResponses.of_task(
    client=client,
    model_name="gpt-4.1-mini",
    task=customer_support.inquiry_classification()
)

inquiries = [
    "I can't log into my account",
    "When will my order arrive?",
    "I want to cancel my subscription"
]
classifications = classifier.parse(inquiries)

for classification in classifications:
    print(f"Category: {classification.category}")
    print(f"Subcategory: {classification.subcategory}")
    print(f"Confidence: {classification.confidence}")
    print(f"Routing: {classification.routing}")

Customized for e-commerce:

from openaivec.task import customer_support

# E-commerce specific categories
ecommerce_categories = {
    "order_management": ["order_status", "order_cancellation", "order_modification", "returns"],
    "payment": ["payment_failed", "refund_request", "payment_methods", "billing_inquiry"],
    "product": ["product_info", "size_guide", "availability", "recommendations"],
    "shipping": ["delivery_status", "shipping_cost", "delivery_options", "tracking"],
    "account": ["login_issues", "account_settings", "profile_updates", "password_reset"],
    "general": ["complaints", "compliments", "feedback", "other"]
}

ecommerce_routing = {
    "order_management": "order_team",
    "payment": "billing_team",
    "product": "product_team",
    "shipping": "logistics_team",
    "account": "account_support",
    "general": "general_support"
}

task = customer_support.inquiry_classification(
    categories=ecommerce_categories,
    routing_rules=ecommerce_routing,
    business_context="e-commerce platform"
)

classifier = BatchResponses.of_task(
    client=client,
    model_name="gpt-4.1-mini",
    task=task
)

With pandas integration:

import pandas as pd
from openaivec import pandas_ext  # Required for .ai accessor
from openaivec.task import customer_support

df = pd.DataFrame({"inquiry": [
    "I can't log into my account",
    "When will my order arrive?",
    "I want to cancel my subscription"
]})
df["classification"] = df["inquiry"].ai.task(customer_support.inquiry_classification())

# Extract classification components
extracted_df = df.ai.extract("classification")
print(extracted_df[[
    "inquiry", "classification_category",
    "classification_subcategory", "classification_confidence"
]])
Functions
inquiry_classification
inquiry_classification(
    categories: Dict[str, list[str]] | None = None,
    routing_rules: Dict[str, str] | None = None,
    priority_rules: Dict[str, str] | None = None,
    business_context: str = "general customer support",
    custom_keywords: Dict[str, list[str]] | None = None,
    **api_kwargs,
) -> PreparedTask

Create a configurable inquiry classification task.

Parameters:

categories (dict[str, list[str]] | None): Dictionary mapping category names to lists of subcategories. Default provides standard support categories. Default: None.
routing_rules (dict[str, str] | None): Dictionary mapping categories to routing destinations. Default provides standard routing options. Default: None.
priority_rules (dict[str, str] | None): Dictionary mapping keywords/patterns to priority levels. Default uses standard priority indicators. Default: None.
business_context (str): Description of the business context to help with classification. Default: 'general customer support'.
custom_keywords (dict[str, list[str]] | None): Dictionary mapping categories to relevant keywords. Default: None.
**api_kwargs: Additional keyword arguments to pass to the OpenAI API, such as temperature, top_p, etc. Default: {}.

Returns:

PreparedTask: PreparedTask configured for inquiry classification.

Source code in src/openaivec/task/customer_support/inquiry_classification.py
def inquiry_classification(
    categories: Dict[str, list[str]] | None = None,
    routing_rules: Dict[str, str] | None = None,
    priority_rules: Dict[str, str] | None = None,
    business_context: str = "general customer support",
    custom_keywords: Dict[str, list[str]] | None = None,
    **api_kwargs,
) -> PreparedTask:
    """Create a configurable inquiry classification task.

    Args:
        categories (dict[str, list[str]] | None): Dictionary mapping category names to lists of subcategories.
            Default provides standard support categories.
        routing_rules (dict[str, str] | None): Dictionary mapping categories to routing destinations.
            Default provides standard routing options.
        priority_rules (dict[str, str] | None): Dictionary mapping keywords/patterns to priority levels.
            Default uses standard priority indicators.
        business_context (str): Description of the business context to help with classification.
        custom_keywords (dict[str, list[str]] | None): Dictionary mapping categories to relevant keywords.
        **api_kwargs: Additional keyword arguments to pass to the OpenAI API,
            such as temperature, top_p, etc.

    Returns:
        PreparedTask configured for inquiry classification.
    """

    # Default categories
    if categories is None:
        categories = {
            "technical": [
                "login_issues",
                "password_reset",
                "app_crashes",
                "connectivity_problems",
                "feature_not_working",
            ],
            "billing": [
                "payment_failed",
                "invoice_questions",
                "refund_request",
                "pricing_inquiry",
                "subscription_changes",
            ],
            "product": [
                "feature_request",
                "product_information",
                "compatibility_questions",
                "how_to_use",
                "bug_reports",
            ],
            "shipping": [
                "delivery_status",
                "shipping_address",
                "delivery_issues",
                "tracking_number",
                "expedited_shipping",
            ],
            "account": ["account_creation", "profile_updates", "account_deletion", "data_export", "privacy_settings"],
            "general": ["compliments", "complaints", "feedback", "partnership_inquiry", "other"],
        }

    # Default routing rules
    if routing_rules is None:
        routing_rules = {
            "technical": "tech_support",
            "billing": "billing_team",
            "product": "product_team",
            "shipping": "shipping_team",
            "account": "account_management",
            "general": "general_support",
        }

    # Default priority rules
    if priority_rules is None:
        priority_rules = {
            "urgent": "urgent, emergency, critical, down, outage, security, breach, immediate",
            "high": "login, password, payment, billing, delivery, problem, issue, error, bug",
            "medium": "feature, request, question, how, help, support, feedback",
            "low": "information, compliment, thank, suggestion, general, other",
        }

    # Build categories section
    categories_text = "Categories and subcategories:\n"
    for category, subcategories in categories.items():
        categories_text += f"- {category}: {', '.join(subcategories)}\n"

    # Build routing section
    routing_text = "Routing options:\n"
    for category, routing in routing_rules.items():
        routing_text += f"- {routing}: {category.replace('_', ' ').title()} issues\n"

    # Build priority section
    priority_text = "Priority levels:\n"
    for priority, keywords in priority_rules.items():
        priority_text += f"- {priority}: {keywords}\n"

    # Build custom keywords section
    keywords_text = ""
    if custom_keywords:
        keywords_text = "\nCustom keywords for classification:\n"
        for category, keywords in custom_keywords.items():
            keywords_text += f"- {category}: {', '.join(keywords)}\n"

    instructions = f"""Classify the customer inquiry into the appropriate category and subcategory
based on the configured categories and business context.

Business Context: {business_context}

{categories_text}

{routing_text}

{priority_text}

{keywords_text}

Instructions:
1. Analyze the inquiry in the context of: {business_context}
2. Classify into the most appropriate category and subcategory
3. Provide confidence score based on clarity of the inquiry
4. Suggest routing based on the configured rules
5. Extract relevant keywords that influenced the classification
6. Assign priority level based on content and urgency indicators
7. Indicate whether the inquiry matches the business context

Consider:
- Explicit keywords and phrases
- Implied intent and context
- Emotional tone and urgency
- Technical complexity
- Business impact
- Customer type indicators

IMPORTANT: Provide analysis responses in the same language as the input text, except for the
predefined categorical fields (priority) which must use the exact English values specified above.
Category, subcategory, routing, and keywords should reflect the content and can be in the input
language where appropriate, but priority must use English values like "high".

Provide accurate classification with detailed reasoning."""

    return PreparedTask(instructions=instructions, response_format=InquiryClassification, api_kwargs=api_kwargs)
inquiry_summary

Inquiry summary task for customer support interactions.

This module provides a predefined task for summarizing customer inquiries, extracting key information, and creating concise summaries for support agents and management reporting.

Example

Basic usage with BatchResponses:

from openai import OpenAI
from openaivec._responses import BatchResponses
from openaivec.task import customer_support

client = OpenAI()
summarizer = BatchResponses.of_task(
    client=client,
    model_name="gpt-4.1-mini",
    task=customer_support.INQUIRY_SUMMARY
)

inquiries = [
    '''Hi there, I've been having trouble with my account for the past week.
    Every time I try to log in, it says my password is incorrect, but I'm sure
    it's right. I tried resetting it twice but the email never arrives.
    I'm getting really frustrated because I need to access my files for work tomorrow.''',

    '''I love your product! It's been incredibly helpful for my team.
    However, I was wondering if there's any way to get more storage space?
    We're running out and would like to upgrade our plan.'''
]
summaries = summarizer.parse(inquiries)

for summary in summaries:
    print(f"Summary: {summary.summary}")
    print(f"Issue: {summary.main_issue}")
    print(f"Actions Taken: {summary.actions_taken}")
    print(f"Resolution Status: {summary.resolution_status}")

With pandas integration:

import pandas as pd
from openaivec import pandas_ext  # Required for .ai accessor
from openaivec.task import customer_support

df = pd.DataFrame({"inquiry": [long_inquiry_text]})
df["summary"] = df["inquiry"].ai.task(customer_support.INQUIRY_SUMMARY)

# Extract summary components
extracted_df = df.ai.extract("summary")
print(extracted_df[["inquiry", "summary_main_issue", "summary_resolution_status"]])

Attributes:

INQUIRY_SUMMARY (PreparedTask): A prepared task instance configured for inquiry summarization with temperature=0.0 and top_p=1.0 for deterministic output.

Functions
inquiry_summary
inquiry_summary(
    summary_length: str = "concise",
    business_context: str = "general customer support",
    **api_kwargs,
) -> PreparedTask

Create a configurable inquiry summary task.

Parameters:

summary_length (str): Length of summary (concise, detailed, bullet_points). Default: 'concise'.
business_context (str): Business context for summary. Default: 'general customer support'.
**api_kwargs: Additional keyword arguments to pass to the OpenAI API, such as temperature, top_p, etc. Default: {}.

Returns:

PreparedTask: PreparedTask configured for inquiry summarization.

Source code in src/openaivec/task/customer_support/inquiry_summary.py
def inquiry_summary(
    summary_length: str = "concise",
    business_context: str = "general customer support",
    **api_kwargs,
) -> PreparedTask:
    """Create a configurable inquiry summary task.

    Args:
        summary_length (str): Length of summary (concise, detailed, bullet_points).
        business_context (str): Business context for summary.
        **api_kwargs: Additional keyword arguments to pass to the OpenAI API,
            such as temperature, top_p, etc.

    Returns:
        PreparedTask configured for inquiry summarization.
    """

    length_instructions = {
        "concise": "Write a concise 2-3 sentence summary that captures the essence of the inquiry",
        "detailed": "Write a detailed 4-6 sentence summary that includes comprehensive context",
        "bullet_points": "Create a bullet-point summary with key facts and actions",
    }

    instructions = f"""Create a comprehensive summary of the customer inquiry that captures all
essential information for support agents and management.

Business Context: {business_context}
Summary Style: {length_instructions.get(summary_length, length_instructions["concise"])}

Summary Guidelines:
1. {length_instructions.get(summary_length, length_instructions["concise"])}
2. Identify the primary issue or request clearly
3. Note any secondary issues that may need attention
4. Extract relevant customer background or context
5. List any troubleshooting steps the customer has already tried
6. Include timeline information about when issues started
7. Describe the business or personal impact on the customer
8. Assess current resolution status based on the inquiry
9. Extract key technical details, error messages, or specific information
10. Determine if follow-up communication will be needed

Resolution Status Categories:
- not_started: New inquiry, no resolution attempts yet
- in_progress: Customer has tried some solutions, but issue persists
- needs_escalation: Complex issue requiring specialized attention
- resolved: Issue appears to be resolved based on customer feedback

Key Details to Extract:
- Error messages or codes
- Product versions or configurations
- Account information (without sensitive data)
- Technical specifications
- Business impact details
- Deadline or time constraints
- Previous ticket references

Impact Assessment:
- Business operations affected
- Revenue implications
- User experience degradation
- Time-sensitive requirements
- Reputation concerns

Focus on:
- Factual information over emotional content
- Actionable details that help resolution
- Context that aids in prioritization
- Clear distinction between symptoms and root causes
- Relevant background without unnecessary details

IMPORTANT: Provide summary responses in the same language as the input text, except for the
predefined categorical field (resolution_status) which must use the exact English values
specified above (not_started, in_progress, needs_escalation, resolved). For example, if the
input is in German, provide all summary content in German, but use English values like
"in_progress" for resolution_status.

Provide accurate, actionable summary that enables efficient support resolution."""

    return PreparedTask(instructions=instructions, response_format=InquirySummary, api_kwargs=api_kwargs)
intent_analysis

Intent analysis task for customer support interactions.

This module provides a predefined task for analyzing customer intent to understand what the customer is trying to achieve and how to best assist them.

Example

Basic usage with BatchResponses:

from openai import OpenAI
from openaivec._responses import BatchResponses
from openaivec.task import customer_support

client = OpenAI()
analyzer = BatchResponses.of_task(
    client=client,
    model_name="gpt-4.1-mini",
    task=customer_support.INTENT_ANALYSIS
)

inquiries = [
    "I want to upgrade my plan to get more storage",
    "How do I delete my account? I'm not satisfied with the service",
    "Can you walk me through setting up the mobile app?"
]
intents = analyzer.parse(inquiries)

for intent in intents:
    print(f"Primary Intent: {intent.primary_intent}")
    print(f"Action Required: {intent.action_required}")
    print(f"Success Likelihood: {intent.success_likelihood}")
    print(f"Next Steps: {intent.next_steps}")

With pandas integration:

import pandas as pd
from openaivec import pandas_ext  # Required for .ai accessor
from openaivec.task import customer_support

df = pd.DataFrame({"inquiry": [
    "I want to upgrade my plan to get more storage",
    "How do I delete my account? I'm not satisfied with the service",
    "Can you walk me through setting up the mobile app?"
]})
df["intent"] = df["inquiry"].ai.task(customer_support.INTENT_ANALYSIS)

# Extract intent components
extracted_df = df.ai.extract("intent")
print(extracted_df[["inquiry", "intent_primary_intent", "intent_action_required", "intent_success_likelihood"]])

Attributes:

INTENT_ANALYSIS (PreparedTask): A prepared task instance configured for intent analysis with temperature=0.0 and top_p=1.0 for deterministic output.

Functions
intent_analysis
intent_analysis(
    business_context: str = "general customer support",
    **api_kwargs,
) -> PreparedTask

Create a configurable intent analysis task.

Parameters:

business_context (str): Business context for intent analysis. Default: 'general customer support'.
**api_kwargs: Additional keyword arguments to pass to the OpenAI API, such as temperature, top_p, etc. Default: {}.

Returns:

PreparedTask: PreparedTask configured for intent analysis.

Source code in src/openaivec/task/customer_support/intent_analysis.py
def intent_analysis(business_context: str = "general customer support", **api_kwargs) -> PreparedTask:
    """Create a configurable intent analysis task.

    Args:
        business_context (str): Business context for intent analysis.
        **api_kwargs: Additional keyword arguments to pass to the OpenAI API,
            such as temperature, top_p, etc.

    Returns:
        PreparedTask configured for intent analysis.
    """

    instructions = f"""Analyze customer intent to understand their goals, needs, and how to best assist them.

Business Context: {business_context}

Primary Intent Categories:
- get_help: Seeking assistance with existing product or service
- make_purchase: Interested in buying or upgrading service
- cancel_service: Wants to terminate subscription or service
- get_refund: Seeking monetary reimbursement
- report_issue: Reporting problems or bugs
- seek_information: Looking for details about products, policies, or procedures
- request_feature: Asking for new functionality or improvements
- provide_feedback: Sharing opinions, suggestions, or experiences

Action Required:
- provide_information: Share knowledge, documentation, or explanations
- troubleshoot: Diagnose and resolve technical issues
- process_request: Handle account changes, orders, or service requests
- escalate: Transfer to specialized team or management
- redirect: Point to appropriate resources or departments
- schedule_callback: Arrange follow-up communication

Success Likelihood Factors:
- very_high: Simple request, clear solution available
- high: Standard procedure, likely to be resolved quickly
- medium: Requires some investigation or coordination
- low: Complex issue, may need multiple touchpoints
- very_low: Unclear requirements, potential policy conflicts

Resolution Complexity:
- simple: Can be resolved in single interaction with standard procedures
- moderate: May require 2-3 interactions or coordination with another team
- complex: Requires significant investigation, multiple teams, or policy exceptions
- very_complex: Involves technical issues, legal considerations, or major system changes

Analysis Guidelines:
1. Look for explicit statements of what customer wants
2. Identify implicit needs based on context and emotional state
3. Consider potential blocking factors (technical, policy, or procedural)
4. Assess realistic success likelihood based on typical resolution patterns
5. Recommend specific next steps that advance toward customer goal

Pay attention to:
- Direct requests: "I want to...", "I need to...", "Can you help me..."
- Problem statements: "I'm having trouble with...", "It's not working..."
- Emotional context: Frustration may indicate deeper issues beyond stated problem
- Urgency indicators: Time pressure affects resolution approach
- Previous interactions: References to prior support contacts

IMPORTANT: Provide analysis responses in the same language as the input text, except for the
predefined categorical fields (primary_intent, action_required, success_likelihood,
resolution_complexity) which must use the exact English values specified above. For example,
if the input is in Japanese, provide customer_goal, implicit_needs, blocking_factors,
next_steps, and reasoning in Japanese, but use English values like "get_help" for primary_intent.

Provide comprehensive intent analysis with actionable recommendations."""

    return PreparedTask(instructions=instructions, response_format=IntentAnalysis, api_kwargs=api_kwargs)
response_suggestion

Response suggestion task for customer support interactions.

This module provides a predefined task for generating suggested responses to customer inquiries, helping support agents provide consistent, helpful, and professional communication.

Example

Basic usage with BatchResponses:

from openai import OpenAI
from openaivec._responses import BatchResponses
from openaivec.task import customer_support

client = OpenAI()
responder = BatchResponses.of_task(
    client=client,
    model_name="gpt-4.1-mini",
    task=customer_support.RESPONSE_SUGGESTION
)

inquiries = [
    "I can't access my account. I've tried resetting my password but the email never arrives.",
    "I'm really disappointed with your service. This is the third time I've had issues.",
    "Thank you for your help yesterday! The problem is now resolved."
]
responses = responder.parse(inquiries)

for response in responses:
    print(f"Suggested Response: {response.suggested_response}")
    print(f"Tone: {response.tone}")
    print(f"Priority: {response.priority}")
    print(f"Follow-up: {response.follow_up_required}")

With pandas integration:

import pandas as pd
from openaivec import pandas_ext  # Required for .ai accessor
from openaivec.task import customer_support

df = pd.DataFrame({"inquiry": [
    "I can't access my account. I've tried resetting my password but the email never arrives.",
    "I'm really disappointed with your service. This is the third time I've had issues."
]})
df["response"] = df["inquiry"].ai.task(customer_support.RESPONSE_SUGGESTION)

# Extract response components
extracted_df = df.ai.extract("response")
print(extracted_df[["inquiry", "response_suggested_response", "response_tone", "response_priority"]])

Attributes:

RESPONSE_SUGGESTION (PreparedTask): A prepared task instance configured for response suggestion with temperature=0.0 and top_p=1.0 for deterministic output.

Functions
response_suggestion
response_suggestion(
    response_style: str = "professional",
    company_name: str = "our company",
    business_context: str = "general customer support",
    **api_kwargs,
) -> PreparedTask

Create a configurable response suggestion task.

Parameters:

response_style (str): Style of response (professional, friendly, empathetic, formal). Default: 'professional'.
company_name (str): Name of the company for personalization. Default: 'our company'.
business_context (str): Business context for responses. Default: 'general customer support'.
**api_kwargs: Additional keyword arguments to pass to the OpenAI API, such as temperature, top_p, etc. Default: {}.

Returns:

PreparedTask: PreparedTask configured for response suggestions.

Source code in src/openaivec/task/customer_support/response_suggestion.py
def response_suggestion(
    response_style: str = "professional",
    company_name: str = "our company",
    business_context: str = "general customer support",
    **api_kwargs,
) -> PreparedTask:
    """Create a configurable response suggestion task.

    Args:
        response_style (str): Style of response (professional, friendly, empathetic, formal).
        company_name (str): Name of the company for personalization.
        business_context (str): Business context for responses.
        **api_kwargs: Additional keyword arguments to pass to the OpenAI API,
            such as temperature, top_p, etc.

    Returns:
        PreparedTask configured for response suggestions.
    """

    style_instructions = {
        "professional": "Maintain professional tone with clear, direct communication",
        "friendly": "Use warm, approachable language while remaining professional",
        "empathetic": "Show understanding and compassion for customer concerns",
        "formal": "Use formal business language appropriate for official communications",
    }

    instructions = f"""Generate a professional, helpful response suggestion for the customer
inquiry that addresses their needs effectively.

Business Context: {business_context}
Company Name: {company_name}
Response Style: {style_instructions.get(response_style, style_instructions["professional"])}

Response Guidelines:
1. Address the customer's main concern directly
2. Use appropriate tone based on customer sentiment
3. Provide clear next steps or solutions
4. Include empathy when dealing with frustrated customers
5. Maintain professional standards while being human
6. Offer specific help rather than generic responses
7. Set appropriate expectations for resolution time
8. Include any necessary disclaimers or policy information

Tone Selection:
- empathetic: For frustrated, disappointed, or upset customers
- professional: For business inquiries, formal requests, or complex issues
- friendly: For positive interactions, thank you messages, or simple questions
- apologetic: For service failures, bugs, or company mistakes
- solution_focused: For technical issues requiring specific steps

Response Types:
- acknowledgment: Confirming receipt and understanding of the inquiry
- solution: Providing direct answers or resolution steps
- escalation: Transferring to appropriate team or management
- information_request: Asking for additional details to help resolve
- closure: Confirming resolution and checking customer satisfaction

Priority Levels:
- immediate: Critical issues requiring instant response
- high: Urgent problems needing quick attention
- medium: Standard inquiries with normal response time
- low: General questions or feedback with flexible timing

Key Elements to Include:
- Acknowledge the customer's specific issue
- Show understanding of their frustration or needs
- Provide clear, actionable next steps
- Set realistic expectations for resolution
- Offer additional assistance if needed
- Include relevant contact information or resources

Response Structure:
1. Opening: Acknowledge and thank the customer
2. Empathy: Show understanding of their situation
3. Solution: Provide specific help or next steps
4. Follow-up: Offer continued assistance
5. Closing: Professional sign-off

Personalization Considerations:
- Use customer's name if provided
- Reference specific details from their inquiry
- Acknowledge their loyalty or relationship length
- Tailor language to their communication style
- Consider their apparent technical expertise level

Avoid:
- Generic, templated responses
- Overly technical language for non-technical customers
- Making promises that can't be kept
- Dismissing customer concerns
- Lengthy responses that don't address the main issue

IMPORTANT: Generate responses in the same language as the input text, except for the predefined
categorical fields (tone, priority, response_type, estimated_resolution_time) which must use
the exact English values specified above. For example, if the input is in Italian, provide
suggested_response, key_points, alternative_responses, and personalization_notes in Italian,
but use English values like "empathetic" for tone.

Generate helpful, professional response that moves toward resolution while maintaining
positive customer relationship."""

    return PreparedTask(instructions=instructions, response_format=ResponseSuggestion, api_kwargs=api_kwargs)
urgency_analysis

Urgency analysis task for customer support.

This module provides a configurable task for analyzing the urgency level of customer inquiries to help prioritize the support queue and response times.

Example

Basic usage with default settings:

from openai import OpenAI
from openaivec._responses import BatchResponses
from openaivec.task import customer_support

client = OpenAI()
analyzer = BatchResponses.of_task(
    client=client,
    model_name="gpt-4.1-mini",
    task=customer_support.urgency_analysis()
)

inquiries = [
    "URGENT: My website is down and I'm losing customers!",
    "Can you help me understand how to use the new feature?",
    "I haven't received my order from last week"
]
analyses = analyzer.parse(inquiries)

for analysis in analyses:
    print(f"Urgency Level: {analysis.urgency_level}")
    print(f"Score: {analysis.urgency_score}")
    print(f"Response Time: {analysis.response_time}")
    print(f"Escalation: {analysis.escalation_required}")

Customized for SaaS platform with business hours:

from openaivec.task import customer_support

# SaaS-specific urgency levels
saas_urgency_levels = {
    "critical": "Service outages, security breaches, data loss",
    "high": "Login issues, payment failures, API errors",
    "medium": "Feature bugs, performance issues, billing questions",
    "low": "Feature requests, documentation questions, general feedback"
}

# Custom response times based on SLA
saas_response_times = {
    "critical": "immediate",
    "high": "within_1_hour",
    "medium": "within_4_hours",
    "low": "within_24_hours"
}

# Enterprise customer tier gets priority
enterprise_customer_tiers = {
    "enterprise": "Priority support, dedicated account manager",
    "business": "Standard business support",
    "professional": "Professional plan support",
    "starter": "Basic support"
}

task = customer_support.urgency_analysis(
    urgency_levels=saas_urgency_levels,
    response_times=saas_response_times,
    customer_tiers=enterprise_customer_tiers,
    business_context="SaaS platform",
    business_hours="9 AM - 5 PM EST, Monday-Friday"
)

analyzer = BatchResponses.of_task(
    client=client,
    model_name="gpt-4.1-mini",
    task=task
)

With pandas integration:

import pandas as pd
from openaivec import pandas_ext  # Required for .ai accessor
from openaivec.task import customer_support

df = pd.DataFrame({"inquiry": [
    "URGENT: My website is down and I'm losing customers!",
    "Can you help me understand how to use the new feature?",
    "I haven't received my order from last week"
]})
df["urgency"] = df["inquiry"].ai.task(customer_support.urgency_analysis())

# Extract urgency components
extracted_df = df.ai.extract("urgency")
print(extracted_df[["inquiry", "urgency_urgency_level", "urgency_urgency_score", "urgency_response_time"]])
Functions
urgency_analysis
urgency_analysis(
    urgency_levels: Dict[str, str] | None = None,
    response_times: Dict[str, str] | None = None,
    customer_tiers: Dict[str, str] | None = None,
    escalation_rules: Dict[str, str] | None = None,
    urgency_keywords: Dict[str, list[str]] | None = None,
    business_context: str = "general customer support",
    business_hours: str = "24/7 support",
    sla_rules: Dict[str, str] | None = None,
    **api_kwargs,
) -> PreparedTask

Create a configurable urgency analysis task.

Parameters:

urgency_levels (dict[str, str] | None): Dictionary mapping urgency levels to descriptions. Default: None.
response_times (dict[str, str] | None): Dictionary mapping urgency levels to response times. Default: None.
customer_tiers (dict[str, str] | None): Dictionary mapping tier names to descriptions. Default: None.
escalation_rules (dict[str, str] | None): Dictionary mapping conditions to escalation actions. Default: None.
urgency_keywords (dict[str, list[str]] | None): Dictionary mapping urgency levels to indicator keywords. Default: None.
business_context (str): Description of the business context. Default: 'general customer support'.
business_hours (str): Description of business hours for response time calculation. Default: '24/7 support'.
sla_rules (dict[str, str] | None): Dictionary mapping customer tiers to SLA requirements. Default: None.
**api_kwargs: Additional keyword arguments to pass to the OpenAI API, such as temperature, top_p, etc. Default: {}.

Returns:

PreparedTask: PreparedTask configured for urgency analysis.

Source code in src/openaivec/task/customer_support/urgency_analysis.py
def urgency_analysis(
    urgency_levels: Dict[str, str] | None = None,
    response_times: Dict[str, str] | None = None,
    customer_tiers: Dict[str, str] | None = None,
    escalation_rules: Dict[str, str] | None = None,
    urgency_keywords: Dict[str, list[str]] | None = None,
    business_context: str = "general customer support",
    business_hours: str = "24/7 support",
    sla_rules: Dict[str, str] | None = None,
    **api_kwargs,
) -> PreparedTask:
    """Create a configurable urgency analysis task.

    Args:
        urgency_levels (dict[str, str] | None): Dictionary mapping urgency levels to descriptions.
        response_times (dict[str, str] | None): Dictionary mapping urgency levels to response times.
        customer_tiers (dict[str, str] | None): Dictionary mapping tier names to descriptions.
        escalation_rules (dict[str, str] | None): Dictionary mapping conditions to escalation actions.
        urgency_keywords (dict[str, list[str]] | None): Dictionary mapping urgency levels to indicator keywords.
        business_context (str): Description of the business context.
        business_hours (str): Description of business hours for response time calculation.
        sla_rules (dict[str, str] | None): Dictionary mapping customer tiers to SLA requirements.
        **api_kwargs: Additional keyword arguments to pass to the OpenAI API,
            such as temperature, top_p, etc.

    Returns:
        PreparedTask configured for urgency analysis.
    """

    # Default urgency levels
    if urgency_levels is None:
        urgency_levels = {
            "critical": "Service outages, security breaches, data loss, system failures affecting business operations",
            "high": "Account locked, payment failures, urgent deadlines, angry customers, revenue-impacting issues",
            "medium": "Feature not working, delivery delays, billing questions, moderate customer frustration",
            "low": "General questions, feature requests, feedback, compliments, minor issues",
        }

    # Default response times
    if response_times is None:
        response_times = {
            "critical": "immediate",
            "high": "within_1_hour",
            "medium": "within_4_hours",
            "low": "within_24_hours",
        }

    # Default customer tiers
    if customer_tiers is None:
        customer_tiers = {
            "enterprise": "Large contracts, multiple users, business-critical usage",
            "premium": "Paid plans, professional use, higher expectations",
            "standard": "Regular paid users, normal expectations",
            "basic": "Free users, casual usage, lower priority",
        }

    # Default escalation rules
    if escalation_rules is None:
        escalation_rules = {
            "immediate": "Critical issues, security breaches, service outages",
            "within_1_hour": "High urgency with customer tier enterprise or premium",
            "manager_review": "Threats to cancel, legal language, compliance issues",
            "no_escalation": "Standard support can handle",
        }

    # Default urgency keywords
    if urgency_keywords is None:
        urgency_keywords = {
            "critical": ["urgent", "emergency", "critical", "down", "outage", "security", "breach", "immediate"],
            "high": ["ASAP", "urgent", "problem", "issue", "error", "bug", "frustrated", "angry"],
            "medium": ["question", "help", "support", "feedback", "concern", "delayed"],
            "low": ["information", "thank", "compliment", "suggestion", "general", "when convenient"],
        }

    # Default SLA rules
    if sla_rules is None:
        sla_rules = {
            "enterprise": "Critical: 15min, High: 1hr, Medium: 4hr, Low: 24hr",
            "premium": "Critical: 30min, High: 2hr, Medium: 8hr, Low: 48hr",
            "standard": "Critical: 1hr, High: 4hr, Medium: 24hr, Low: 72hr",
            "basic": "Critical: 4hr, High: 24hr, Medium: 72hr, Low: 1week",
        }

    # Build urgency levels section
    urgency_text = "Urgency Levels:\n"
    for level, description in urgency_levels.items():
        urgency_text += f"- {level}: {description}\n"

    # Build response times section
    response_text = "Response Times:\n"
    for level, time in response_times.items():
        response_text += f"- {level}: {time}\n"

    # Build customer tiers section
    tiers_text = "Customer Tiers:\n"
    for tier, description in customer_tiers.items():
        tiers_text += f"- {tier}: {description}\n"

    # Build escalation rules section
    escalation_text = "Escalation Rules:\n"
    for condition, action in escalation_rules.items():
        escalation_text += f"- {condition}: {action}\n"

    # Build urgency keywords section
    keywords_text = "Urgency Keywords:\n"
    for level, keywords in urgency_keywords.items():
        keywords_text += f"- {level}: {', '.join(keywords)}\n"

    # Build SLA rules section
    sla_text = "SLA Rules:\n"
    for tier, sla in sla_rules.items():
        sla_text += f"- {tier}: {sla}\n"

    instructions = f"""Analyze the urgency level of the customer inquiry based on language, content, and context.

Business Context: {business_context}
Business Hours: {business_hours}

{urgency_text}

{response_text}

{tiers_text}

{escalation_text}

{keywords_text}

{sla_text}

Instructions:
1. Analyze the inquiry in the context of: {business_context}
2. Identify urgency indicators in the language and content
3. Classify into the appropriate urgency level
4. Calculate urgency score (0.0-1.0) based on multiple factors
5. Recommend response time based on urgency level and customer tier
6. Determine if escalation is required based on configured rules
7. Assess potential business impact
8. Infer customer tier from language and context
9. Provide reasoning for the urgency assessment
10. Check SLA compliance with recommended response time

Consider:
- Explicit urgency language and keywords
- Emotional tone and intensity
- Business impact indicators
- Time pressure and deadlines
- Customer tier indicators
- Previous escalation language
- Revenue or operational impact
- Compliance or legal implications

IMPORTANT: Provide analysis responses in the same language as the input text, except for the
predefined categorical fields (urgency_level, response_time, business_impact, customer_tier)
which must use the exact English values specified above. For example, if the input is in French,
provide urgency_indicators and reasoning in French, but use English values like "critical" for
urgency_level.

Provide detailed analysis with clear reasoning for urgency level and response time recommendations."""

    return PreparedTask(instructions=instructions, response_format=UrgencyAnalysis, api_kwargs=api_kwargs)

nlp

Modules
dependency_parsing

Dependency parsing task for OpenAI API.

This module provides a predefined task for dependency parsing that analyzes syntactic dependencies between words in sentences using OpenAI's language models.

Example

Basic usage with BatchResponses:

from openai import OpenAI
from openaivec._responses import BatchResponses
from openaivec.task import nlp

client = OpenAI()
analyzer = BatchResponses.of_task(
    client=client,
    model_name="gpt-4.1-mini",
    task=nlp.DEPENDENCY_PARSING
)

texts = ["The cat sat on the mat.", "She quickly ran to the store."]
analyses = analyzer.parse(texts)

for analysis in analyses:
    print(f"Tokens: {analysis.tokens}")
    print(f"Dependencies: {analysis.dependencies}")
    print(f"Root: {analysis.root_word}")

With pandas integration:

import pandas as pd
from openaivec import pandas_ext  # Required for .ai accessor
from openaivec.task import nlp

df = pd.DataFrame({"text": ["The cat sat on the mat.", "She quickly ran to the store."]})
df["parsing"] = df["text"].ai.task(nlp.DEPENDENCY_PARSING)

# Extract parsing components
extracted_df = df.ai.extract("parsing")
print(extracted_df[["text", "parsing_tokens", "parsing_root_word", "parsing_syntactic_structure"]])

Attributes:

DEPENDENCY_PARSING (PreparedTask): A prepared task instance configured for dependency parsing with temperature=0.0 and top_p=1.0 for deterministic output.

keyword_extraction

Keyword extraction task for OpenAI API.

This module provides a predefined task for keyword extraction that identifies important keywords and phrases from text using OpenAI's language models.

Example

Basic usage with BatchResponses:

from openai import OpenAI
from openaivec._responses import BatchResponses
from openaivec.task import nlp

client = OpenAI()
analyzer = BatchResponses.of_task(
    client=client,
    model_name="gpt-4.1-mini",
    task=nlp.KEYWORD_EXTRACTION
)

texts = ["Machine learning is transforming the technology industry.",
         "Climate change affects global weather patterns."]
analyses = analyzer.parse(texts)

for analysis in analyses:
    print(f"Keywords: {analysis.keywords}")
    print(f"Key phrases: {analysis.keyphrases}")
    print(f"Topics: {analysis.topics}")

With pandas integration:

import pandas as pd
from openaivec import pandas_ext  # Required for .ai accessor
from openaivec.task import nlp

df = pd.DataFrame({"text": ["Machine learning is transforming the technology industry.",
                           "Climate change affects global weather patterns."]})
df["keywords"] = df["text"].ai.task(nlp.KEYWORD_EXTRACTION)

# Extract keyword components
extracted_df = df.ai.extract("keywords")
print(extracted_df[["text", "keywords_keywords", "keywords_topics", "keywords_summary"]])

Attributes:

KEYWORD_EXTRACTION (PreparedTask): A prepared task instance configured for keyword extraction with temperature=0.0 and top_p=1.0 for deterministic output.

morphological_analysis

Morphological analysis task for OpenAI API.

This module provides a predefined task for morphological analysis including tokenization, part-of-speech tagging, and lemmatization using OpenAI's language models.

Example

Basic usage with BatchResponses:

from openai import OpenAI
from openaivec._responses import BatchResponses
from openaivec.task import nlp

client = OpenAI()
analyzer = BatchResponses.of_task(
    client=client,
    model_name="gpt-4.1-mini",
    task=nlp.MORPHOLOGICAL_ANALYSIS
)

texts = ["Running quickly", "The cats are sleeping"]
analyses = analyzer.parse(texts)

for analysis in analyses:
    print(f"Tokens: {analysis.tokens}")
    print(f"POS Tags: {analysis.pos_tags}")
    print(f"Lemmas: {analysis.lemmas}")

With pandas integration:

import pandas as pd
from openaivec import pandas_ext  # Required for .ai accessor
from openaivec.task import nlp

df = pd.DataFrame({"text": ["Running quickly", "The cats are sleeping"]})
df["analysis"] = df["text"].ai.task(nlp.MORPHOLOGICAL_ANALYSIS)

# Extract analysis components
extracted_df = df.ai.extract("analysis")
print(extracted_df[["text", "analysis_tokens", "analysis_pos_tags", "analysis_lemmas"]])

Attributes:

MORPHOLOGICAL_ANALYSIS (PreparedTask): A prepared task instance configured for morphological analysis with temperature=0.0 and top_p=1.0 for deterministic output.

named_entity_recognition

Named entity recognition task for OpenAI API.

This module provides a predefined task for named entity recognition that identifies and classifies named entities in text using OpenAI's language models.

Example

Basic usage with BatchResponses:

from openai import OpenAI
from openaivec._responses import BatchResponses
from openaivec.task import nlp

client = OpenAI()
analyzer = BatchResponses.of_task(
    client=client,
    model_name="gpt-4.1-mini",
    task=nlp.NAMED_ENTITY_RECOGNITION
)

texts = ["John works at Microsoft in Seattle", "The meeting is on March 15th"]
analyses = analyzer.parse(texts)

for analysis in analyses:
    print(f"Persons: {analysis.persons}")
    print(f"Organizations: {analysis.organizations}")
    print(f"Locations: {analysis.locations}")

With pandas integration:

import pandas as pd
from openaivec import pandas_ext  # Required for .ai accessor
from openaivec.task import nlp

df = pd.DataFrame({"text": ["John works at Microsoft in Seattle", "The meeting is on March 15th"]})
df["entities"] = df["text"].ai.task(nlp.NAMED_ENTITY_RECOGNITION)

# Extract entity components
extracted_df = df.ai.extract("entities")
print(extracted_df[["text", "entities_persons", "entities_organizations", "entities_locations"]])

Attributes:

NAMED_ENTITY_RECOGNITION (PreparedTask): A prepared task instance configured for named entity recognition with temperature=0.0 and top_p=1.0 for deterministic output.

sentiment_analysis

Sentiment analysis task for OpenAI API.

This module provides a predefined task for sentiment analysis that analyzes sentiment and emotions in text using OpenAI's language models.

Example

Basic usage with BatchResponses:

from openai import OpenAI
from openaivec._responses import BatchResponses
from openaivec.task import nlp

client = OpenAI()
analyzer = BatchResponses.of_task(
    client=client,
    model_name="gpt-4.1-mini",
    task=nlp.SENTIMENT_ANALYSIS
)

texts = ["I love this product!", "This is terrible and disappointing."]
analyses = analyzer.parse(texts)

for analysis in analyses:
    print(f"Sentiment: {analysis.sentiment}")
    print(f"Confidence: {analysis.confidence}")
    print(f"Emotions: {analysis.emotions}")

With pandas integration:

import pandas as pd
from openaivec import pandas_ext  # Required for .ai accessor
from openaivec.task import nlp

df = pd.DataFrame({"text": ["I love this product!", "This is terrible and disappointing."]})
df["sentiment"] = df["text"].ai.task(nlp.SENTIMENT_ANALYSIS)

# Extract sentiment components
extracted_df = df.ai.extract("sentiment")
print(extracted_df[["text", "sentiment_sentiment", "sentiment_confidence", "sentiment_polarity"]])

Attributes:

SENTIMENT_ANALYSIS (PreparedTask): A prepared task instance configured for sentiment analysis with temperature=0.0 and top_p=1.0 for deterministic output.

translation

Multilingual translation task for OpenAI API.

This module provides a predefined task that translates text into multiple languages using OpenAI's language models. The translation covers a comprehensive set of languages including Germanic, Romance, Slavic, East Asian, South Asian, Southeast Asian, Middle Eastern, African, and other language families.

The task is designed to be used with the OpenAI API for batch processing and provides structured output with consistent language code naming.

Example

Basic usage with BatchResponses:

from openai import OpenAI
from openaivec._responses import BatchResponses
from openaivec.task import nlp

client = OpenAI()
translator = BatchResponses.of_task(
    client=client,
    model_name="gpt-4.1-mini",
    task=nlp.MULTILINGUAL_TRANSLATION
)

texts = ["Hello", "Good morning", "Thank you"]
translations = translator.parse(texts)

for translation in translations:
    print(f"English: {translation.en}")
    print(f"Japanese: {translation.ja}")
    print(f"Spanish: {translation.es}")

With pandas integration:

import pandas as pd
from openaivec import pandas_ext  # Required for .ai accessor
from openaivec.task import nlp

df = pd.DataFrame({"text": ["Hello", "Goodbye"]})
df["translations"] = df["text"].ai.task(nlp.MULTILINGUAL_TRANSLATION)

# Extract specific languages
extracted_df = df.ai.extract("translations")
print(extracted_df[["text", "translations_en", "translations_ja", "translations_fr"]])

Attributes:

MULTILINGUAL_TRANSLATION (PreparedTask): A prepared task instance configured for multilingual translation with temperature=0.0 and top_p=1.0 for deterministic output.

Note

The translation covers 58 languages across major language families. All field names use ISO 639-1 language codes where possible, with some exceptions like 'zh_tw' for Traditional Chinese and 'is_' for Icelandic (to avoid Python keyword conflicts).

Languages included:
- Germanic: English, German, Dutch, Swedish, Danish, Norwegian, Icelandic
- Romance: Spanish, French, Italian, Portuguese, Romanian, Catalan
- Slavic: Russian, Polish, Czech, Slovak, Ukrainian, Bulgarian, Croatian, Serbian
- East Asian: Japanese, Korean, Chinese (Simplified/Traditional)
- South Asian: Hindi, Bengali, Telugu, Tamil, Urdu
- Southeast Asian: Thai, Vietnamese, Indonesian, Malay, Filipino
- Middle Eastern: Arabic, Hebrew, Persian, Turkish
- African: Swahili, Amharic
- Other European: Finnish, Hungarian, Estonian, Latvian, Lithuanian, Greek
- Celtic: Welsh, Irish
- Other: Basque, Maltese
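
A quick, hedged illustration of field access under that naming scheme (en, ja, and es appear in the example above; zh_tw and is_ follow the exceptions noted here; the exact attribute set is defined by the task's response format):

translation = translations[0]  # from the BatchResponses example above
print(translation.en)     # English (ISO 639-1 code)
print(translation.zh_tw)  # Traditional Chinese (non-ISO exception noted above)
print(translation.is_)    # Icelandic ("is" is a Python keyword, hence the trailing underscore)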


table

Classes
FillNaResponse

Bases: BaseModel

Response model for missing value imputation results.

Contains the row index and the imputed value for a specific missing entry in the target column.
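
A minimal sketch of the shape this implies is given below; the field names index and output are assumptions inferred from the fillna usage examples later on this page, not quoted from the library source, and the value's type may differ in the actual model:

from pydantic import BaseModel

class FillNaResponseSketch(BaseModel):
    index: int   # row index of the missing entry in the target column (assumed field name)
    output: str  # imputed value generated for that row (assumed field name; actual type may vary)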

Modules
fillna

Missing value imputation task for DataFrame columns.

This module provides functionality to intelligently fill missing values in DataFrame columns using AI-powered analysis. The task analyzes existing data patterns to generate contextually appropriate values for missing entries.

Example

Basic usage with pandas DataFrame:

import pandas as pd
from openaivec import pandas_ext  # Required for .ai accessor
from openaivec.task.table import fillna

# Create DataFrame with missing values
df = pd.DataFrame({
    "name": ["Alice", "Bob", None, "David"],
    "age": [25, 30, 35, None],
    "city": ["New York", "London", "Tokyo", "Paris"],
    "salary": [50000, 60000, 70000, None]
})

# Fill missing values in the 'salary' column
task = fillna(df, "salary")
filled_salaries = df[df["salary"].isna()].ai.task(task)

# Apply filled values back to DataFrame
for result in filled_salaries:
    df.loc[result.index, "salary"] = result.output

With BatchResponses for more control:

from openai import OpenAI
from openaivec._responses import BatchResponses
from openaivec.task.table import fillna

client = OpenAI()
df = pd.DataFrame({...})  # Your DataFrame with missing values

# Create fillna task for target column
task = fillna(df, "target_column")

# Get rows with missing values in target column
missing_rows = df[df["target_column"].isna()]

# Process with BatchResponses
filler = BatchResponses.of_task(
    client=client,
    model_name="gpt-4.1-mini",
    task=task
)

# Generate inputs for missing rows
inputs = []
for idx, row in missing_rows.iterrows():
    inputs.append({
        "index": idx,
        "input": {k: v for k, v in row.items() if k != "target_column"}
    })

filled_values = filler.parse(inputs)
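
The parsed results can then be written back to the DataFrame, mirroring the loop in the pandas example above (index and output are the FillNaResponse fields used there):

for result in filled_values:
    df.loc[result.index, "target_column"] = result.output
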
Classes
FillNaResponse

Bases: BaseModel

Response model for missing value imputation results.

Contains the row index and the imputed value for a specific missing entry in the target column.

Functions
fillna
fillna(
    df: DataFrame,
    target_column_name: str,
    max_examples: int = 500,
    **api_kwargs,
) -> PreparedTask

Create a prepared task for filling missing values in a DataFrame column.

Analyzes the provided DataFrame to understand data patterns and creates a configured task that can intelligently fill missing values in the specified target column. The task uses few-shot learning with examples extracted from non-null rows in the DataFrame.

Parameters:

df (DataFrame, required): Source DataFrame containing the data with missing values.

target_column_name (str, required): Name of the column to fill missing values for. This column should exist in the DataFrame and contain some non-null values to serve as training examples.

max_examples (int, default 500): Maximum number of example rows to use for few-shot learning. Higher values provide more context but increase token usage and processing time.

**api_kwargs (default {}): Additional keyword arguments to pass to the OpenAI API, such as temperature, top_p, etc.

Returns:

PreparedTask: configured for missing value imputation with:
  • Instructions based on DataFrame patterns
  • FillNaResponse format for structured output
  • Default deterministic settings (temperature=0.0, top_p=1.0)

Raises:

ValueError: If target_column_name doesn't exist in DataFrame, contains no non-null values for training examples, DataFrame is empty, or max_examples is not a positive integer.

Example
import pandas as pd
from openaivec.task.table import fillna

df = pd.DataFrame({
    "product": ["laptop", "phone", "tablet", "laptop"],
    "brand": ["Apple", "Samsung", None, "Dell"],
    "price": [1200, 800, 600, 1000]
})

# Create task to fill missing brand values
task = fillna(df, "brand")

# Use with pandas AI accessor
missing_brands = df[df["brand"].isna()].ai.task(task)
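
For tighter control, the documented parameters can be combined in a single call; the values below are illustrative only:

# Illustrative only: cap the few-shot examples and forward sampling
# parameters to the OpenAI API via **api_kwargs.
task = fillna(df, "brand", max_examples=100, temperature=0.2, top_p=0.9)
missing_brands = df[df["brand"].isna()].ai.task(task)
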
Source code in src/openaivec/task/table/fillna.py
def fillna(df: pd.DataFrame, target_column_name: str, max_examples: int = 500, **api_kwargs) -> PreparedTask:
    """Create a prepared task for filling missing values in a DataFrame column.

    Analyzes the provided DataFrame to understand data patterns and creates
    a configured task that can intelligently fill missing values in the
    specified target column. The task uses few-shot learning with examples
    extracted from non-null rows in the DataFrame.

    Args:
        df (pd.DataFrame): Source DataFrame containing the data with missing values.
        target_column_name (str): Name of the column to fill missing values for.
            This column should exist in the DataFrame and contain some
            non-null values to serve as training examples.
        max_examples (int): Maximum number of example rows to use for few-shot
            learning. Defaults to 500. Higher values provide more context
            but increase token usage and processing time.
        **api_kwargs: Additional keyword arguments to pass to the OpenAI API,
            such as temperature, top_p, etc.

    Returns:
        PreparedTask configured for missing value imputation with:
        - Instructions based on DataFrame patterns
        - FillNaResponse format for structured output
        - Default deterministic settings (temperature=0.0, top_p=1.0)

    Raises:
        ValueError: If target_column_name doesn't exist in DataFrame,
            contains no non-null values for training examples, DataFrame is empty,
            or max_examples is not a positive integer.

    Example:
        ```python
        import pandas as pd
        from openaivec.task.table import fillna

        df = pd.DataFrame({
            "product": ["laptop", "phone", "tablet", "laptop"],
            "brand": ["Apple", "Samsung", None, "Dell"],
            "price": [1200, 800, 600, 1000]
        })

        # Create task to fill missing brand values
        task = fillna(df, "brand")

        # Use with pandas AI accessor
        missing_brands = df[df["brand"].isna()].ai.task(task)
        ```
    """
    if df.empty:
        raise ValueError("DataFrame is empty.")
    if not isinstance(max_examples, int) or max_examples <= 0:
        raise ValueError("max_examples must be a positive integer.")
    if target_column_name not in df.columns:
        raise ValueError(f"Column '{target_column_name}' does not exist in the DataFrame.")
    if df[target_column_name].notna().sum() == 0:
        raise ValueError(f"Column '{target_column_name}' contains no non-null values for training examples.")
    instructions = get_instructions(df, target_column_name, max_examples)
    # Set default values for deterministic results if not provided
    if not api_kwargs:
        api_kwargs = {"temperature": 0.0, "top_p": 1.0}
    return PreparedTask(instructions=instructions, response_format=FillNaResponse, api_kwargs=api_kwargs)