Lnu-AI Storyteller - Powered by GPT-4-0613

Lnu-AI: Storyteller :feather:

Lnu-AI is an Artificial Intelligence system developed to serve as a bridge between the Mi’kmaq language and AI. This platform is rooted in the deep commitment to cultural preservation, leveraging modern technologies like machine learning and natural language processing to aid in the revitalization of the Mi’kmaq language.

We are excited to offer Storyteller as a stand-alone feature. It uses the same story generation logic as the full Lnu-AI system.

Storytelling holds a revered place in Indigenous cultures, passing on the richness of history and creating narratives for future generations to treasure.

With immense pride, I announce the launch of Lnu-AI Storyteller, powered by OpenAI GPT-4-0613 API. This innovative system has been meticulously pre-trained and fine-tuned with all recorded Mi’kmaq words, expressed both in writing and through the voice translations from native speakers.

By entering a single English word, this program unfurls a vivid, detailed story, accompanied by expressive illustrations crafted by a unique logical process that culminates in colorful, vibrant images generated with the DALL·E API. In the background, the program translates the input, finds the most fitting Mi’kmaq word matches, and generates an engaging narrative in English woven through with Mi’kmaq themes and meanings.

While English is the default narration language, the incorporation of gTTS technology enables Lnu-AI Storyteller to narrate captivating tales in over 116 different languages!

In the Mi’kmaq language, A’tugwewinu signifies ‘storyteller’. It’s an honor to present to you, the heartwarming tales of the Mi’kmaq people.

You can enjoy the Storyteller feature by clicking the link below.
Streamlit App

Story Feature Overview

The Storyteller feature of Lnu-AI represents a blend of tradition and technology, aimed at preserving the age-old practice of storytelling in the Mi’kmaq culture while leveraging the state-of-the-art capabilities of AI. By utilizing advanced natural language processing and machine learning, the Storyteller feature creates a platform where users can interact, learn, and connect deeply with the Mi’kmaq language and culture just by entering a single word.


  • AI-Powered Narrations: The Storyteller feature draws on the Embedded Mi’kmaq Corpus and uses AI to bring to life vibrant narratives richly infused with Mi’kmaq phrases, words, and concepts, creating a unique linguistic experience for users.

  • Story Sessions: The feature allows for engaging, interactive story sessions that are randomly selected, ensuring that each story has a different theme and context.

  • Cultural Preservation: By generating stories in both the Mi’kmaq and English language, the feature serves as an effective tool for cultural preservation, facilitating language learning and fostering a deeper appreciation of Mi’kmaq culture.
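The word-matching step behind these narrations could be sketched roughly as follows. This is not the actual Lnu-AI code: the tiny corpus, the toy vectors, and the `find_match` helper are hypothetical stand-ins for the Embedded Mi’kmaq Corpus lookup, which would use learned embeddings from a real model.

```python
import math

# Hypothetical toy embeddings standing in for the Embedded Mi'kmaq Corpus;
# a real system would use high-dimensional learned vectors.
corpus = {
    "a'tugwewinu": [0.9, 0.1, 0.0],    # "storyteller"
    "agase'wa'latl": [0.1, 0.8, 0.2],  # "he/she hires him/her"
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def find_match(query_vec, corpus):
    """Return the corpus word with the highest cosine similarity to the query."""
    return max(corpus, key=lambda w: cosine(query_vec, corpus[w]))

query = [0.85, 0.15, 0.05]  # an English word's (hypothetical) embedding
print(find_match(query, corpus))  # → a'tugwewinu
```

The nearest match would then seed the English narrative with the corresponding Mi’kmaq word and its meanings.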

The project is located at:


Hi @adielaine,

Thanks for posting and welcome to the Streamlit Community Forum! :raised_hands:t5:

This is awesome, great job!

I’m curious, did you consider using other TTS services that might have licensed voice options of Mi’kmaq language people or famous people?

Happy Streamlit-ing! :balloon:


Hi Tony,

I did consider it; there’s some backstory here. The goal of the full Lnu-AI system is to provide a framework for bringing Indigenous languages to life. gTTS was a practical means of generating the language audio and translating it into over 161 languages. Audio generation was challenging given the limits of Streamlit’s st.audio with WAV playback, but by adding logic around gTTS to convert the MP3 output to WAV, the translations and audio play clearly and, for the most part, articulate the language. Voice actors, even trained Indigenous actors, would not capture the language unless they were native speakers of Mi’kmaq, so I trained the program to enunciate and apply proper inflection based on the linguistic structure of the language. Here is a snippet from one of the data files the program uses to speak the words:

"agase'wa'latl": {
    "word": "agase'wa'latl",
    "pronunciation": "a·ga·see·waa·la·dêl",
    "part_of_speech": "verb animate transitive",
    "translation": "He/she hires him/her",
    "meanings": [],
    "example": [
        "Ulagu agase'wa'lapnn Sa'nal.\nYesterday he/she hired Sean.\n"
    ],
    "alternate_forms": []
}
Here is some of the logic data the system was pre-trained on. These are examples of the word-specific format:


        import pandas as pd

        vowels_data = {
            "Front": ["i", "e", ""],
            "Central": ["iː", "eː", "a"],
            "Back": ["u", "o", "aː"],
            "Length": ["short", "long", "long"]
        }
        vowels_df = pd.DataFrame(vowels_data)


        consonants_data = {
            "Labial": ["m", "p", "", ""],
            "Alveolar": ["n", "t", "s", "l"],
            "Palatal": ["", "t͡ʃ", "", "j"],
            "Velar (plain)": ["", "k", "x", ""],
            "Velar (lab.)": ["", "kʷ", "xʷ", "w"]
        }
        consonants_df = pd.DataFrame(consonants_data)

Long story short, licensing did not seem like a solution that would meet my use-case, so I created the logic to meet the objective. Thank you for asking!
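The MP3-to-WAV workaround described above could be sketched like this. It assumes the third-party gTTS and pydub packages are installed (pydub also needs ffmpeg for MP3 decoding); the imports are kept inside the function so this stays an illustrative sketch rather than the app's actual implementation:

```python
import io

def tts_to_wav_bytes(text: str, lang: str = "en") -> bytes:
    """Generate speech with gTTS (MP3) and convert it to WAV bytes in memory.

    Assumes the third-party gTTS and pydub packages are installed, and that
    ffmpeg is available for pydub's MP3 decoding.
    """
    from gtts import gTTS            # third-party: pip install gTTS
    from pydub import AudioSegment   # third-party: pip install pydub

    # gTTS writes MP3; buffer it in memory rather than on disk.
    mp3_buf = io.BytesIO()
    gTTS(text=text, lang=lang).write_to_fp(mp3_buf)
    mp3_buf.seek(0)

    # Re-encode MP3 -> WAV so Streamlit's st.audio plays it cleanly.
    wav_buf = io.BytesIO()
    AudioSegment.from_file(mp3_buf, format="mp3").export(wav_buf, format="wav")
    return wav_buf.getvalue()

# In a Streamlit app, playback might look like:
#   st.audio(tts_to_wav_bytes("Kwe'"), format="audio/wav")
```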



That makes sense. I like the app!

This topic was automatically closed 180 days after the last reply. New replies are no longer allowed.