SnipMail - AI Powered Summary and Sentiment Inbox WebApp

This is a submission for the Postmark Challenge.

What I Built


A webapp that parses the incoming inbox via a webhook from Postmark and uses AI to:

  • Summarize emails
  • Categorize them
  • Rank them based on the sentiment value
  • Calculate an indicator for fraud, scam, phishing, blackmail, etc.
  • Calculate an indicator for spam
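The "rank by sentiment" step above can be sketched as a plain sort over the analyzed emails. This is a minimal sketch; the field names are assumptions, not the app's actual data model:

```typescript
// Hypothetical shape of an analyzed email; the fields mirror the bullets
// above but are assumptions, not the app's actual model.
interface AnalyzedEmail {
  subject: string;
  sentiment: number;      // -1 (very negative) .. 1 (very positive)
  fraudIndicator: number; // 0 .. 1
  spamIndicator: number;  // 0 .. 1
}

// Rank the inbox so the most negative (most attention-worthy) mails come first.
function rankInbox(emails: AnalyzedEmail[]): AnalyzedEmail[] {
  return [...emails].sort((a, b) => a.sentiment - b.sentiment);
}
```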

Demo






How to test:

  1. Go to the demo website.
  2. Fill in a username (lower-case letters, digits, and -_ only) and a password, then click Register.
  3. Once you have registered a new account, follow the instructions on the main page.
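The username rule in step 2 boils down to a small regex. A minimal sketch, assuming the rule is exactly "lower-case letters, digits, - and _":

```typescript
// Hypothetical validator for the username rule described above:
// lower-case letters, digits, '-' and '_' only (at least one character).
const USERNAME_RE = /^[a-z0-9_-]+$/;

function isValidUsername(name: string): boolean {
  return USERNAME_RE.test(name);
}
```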









Code Repository


A simple web app that summarizes and categorizes an email inbox with the help of an LLM like ChatGPT, Gemini, or Claude, and ranks emails based on their sentiment value, fraud indicator, and spam indicator.

Prerequisites

This project uses Bun to develop and build the source code.

Backend Setup

The backend uses SQLite as the database with Drizzle as the ORM. By default it uses better-sqlite3 under the hood, but you can adjust that in the drizzle.config.ts config file and in src/lib/server/db/index.ts.


# Generate SQLite .db file and push DB tables and schema
bun run db:generate
bun run db:push
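As a rough sketch of where that driver adjustment happens, a minimal drizzle.config.ts for the stock better-sqlite3 setup might look like this (paths and schema location are assumptions, not the project's actual values):

```typescript
// Hypothetical drizzle.config.ts sketch -- paths are assumptions; the
// dialect/driver is what you would swap to move off better-sqlite3.
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  dialect: "sqlite",                        // change e.g. to "postgresql" when migrating
  schema: "./src/lib/server/db/schema.ts",  // assumed schema location
  out: "./drizzle",                         // generated migrations
  dbCredentials: {
    url: process.env.DATABASE_URL ?? "local.db",
  },
});
```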


Env variables

DATABASE_URL=local.db # database if the db driver needs it
POSTMARK_INBOUND_EMAIL_ADDRESS=yourinboundhash@inbound.postmarkapp.com
# NOTE: only fill in the API keys for the LLMs you intend to use;
# leave the others blank so the unused LLMs are filtered out
# at server runtime.
ANTHROPIC_API_KEY=XXXXXXXXXXXXX
OPENAI_API_KEY=XXXXXXXXXXXXX
GEMINI_API_KEY= # Blank API key will be filtered

How I Built It


This project is built with Svelte 5 + SvelteKit as the backend, developed on the Bun runtime. Most of the code is written in TypeScript, and for the frontend UI I mostly use Tailwind together with a UI component library.

On the backend, the codebase is pretty flexible thanks to Drizzle ORM. By default it uses SQLite as the database; the demo uses Cloudflare Durable Objects as the SQLite driver and Cloudflare Workers as the platform of choice.

However, SvelteKit and Drizzle ORM are pretty flexible in how the backend stack can be set up. SvelteKit lets the project ship to different platforms just by changing its adapter: Netlify, Vercel, a dedicated VPS with Node or Bun, you name it. Drizzle likewise leaves room to tweak the database as we see fit. It is quite trivial to change the database driver, opening up options such as LibSQL, Turso, Cloudflare D1, Bun SQL, Node's better-sqlite3, and so on. It is also quite possible to switch the database engine to PostgreSQL or MySQL; at the moment the Drizzle schemas are pretty simple, and there is only one view table that needs extra attention in order to migrate away from SQLite.
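That adapter swap is essentially a one-line change in svelte.config.js. A minimal sketch using SvelteKit's official adapter packages (the project's actual config may differ):

```javascript
// Hypothetical svelte.config.js sketch: changing deploy targets is mostly
// a matter of swapping the adapter import.
import adapter from '@sveltejs/adapter-cloudflare';
// import adapter from '@sveltejs/adapter-node';    // dedicated VPS with Node/Bun
// import adapter from '@sveltejs/adapter-vercel';  // Vercel
// import adapter from '@sveltejs/adapter-netlify'; // Netlify

/** @type {import('@sveltejs/kit').Config} */
const config = {
  kit: {
    adapter: adapter(),
  },
};

export default config;
```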

Going further, I'm also leveraging the Vercel AI SDK to make the stack flexible not just on the backend, but also on the AI side of things. Just take a look at how I set up my LLMs; it is possible to use several LLMs at the same time!


export const LLMs = {
  // Only models whose API key is set survive the .filter() below.
  structuredOutputs: [
    !!env.OPENAI_API_KEY && openai("gpt-4.1", { structuredOutputs: true }),
    !!env.GEMINI_API_KEY && gemini("gemini-2.5-flash-preview-04-17", { structuredOutputs: true }),
  ].filter((llm) => !!llm),
  text: [
    !!env.ANTHROPIC_API_KEY && anthropic("claude-4-opus-20250514"),
    !!env.OPENAI_API_KEY && openai("gpt-4.1"),
    !!env.GEMINI_API_KEY && gemini("gemini-2.5-flash-preview-04-17"),
  ].filter((llm) => !!llm),
};

interface LLMOpts {
  structuredOutputs?: boolean;
}

// Pick a random model from the pool that matches the requested capability.
export const getRandomLLM = (opts?: LLMOpts): LanguageModelV1 => {
  if (opts?.structuredOutputs)
    return LLMs.structuredOutputs[Math.floor(Math.random() * LLMs.structuredOutputs.length)];

  return LLMs.text[Math.floor(Math.random() * LLMs.text.length)];
};




Speaking of LLMs, exploring AI in TypeScript turned out to be quite a fun experience for me. Over the two weeks of the Postmark challenge I mostly explored the structured-output capability of some of the LLMs, and using zod with an LLM is such a blast. Composing the instructions for the LLM right inside the schema feels just right; the level of fine-grained control over the system instruction within zod makes me want to work with structured outputs all the time. I would love to start another project with them!

Working with Postmark's API also went pretty smoothly during development. First, I used ChatGPT to generate the zod schema and TypeScript types for the webhook data coming from the inbound email: I simply copy-pasted the webhook example from Postmark's website and told ChatGPT to write the zod schema. After that I used Requestbin to mock the POST webhook data, trying to figure out how to effectively fingerprint the webhook so I could tell which mail goes to which user. It took me a while to realize that "plus addressing" doesn't change the destination of the email, which turned out to be exactly what I needed (yes, I'm an email noob).
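Since plus addressing leaves the destination mailbox unchanged, the tag after the "+" can carry the per-user fingerprint. A minimal sketch of such a routing helper (not the project's actual code; the address format is an assumption):

```typescript
// Hypothetical routing helper: extract the plus-addressing tag from an
// inbound address like "hash+alice@inbound.postmarkapp.com". The part
// after '+' identifies the user; null means no tag was present.
function extractPlusTag(address: string): string | null {
  const [localPart] = address.split("@");
  const plusIndex = localPart.indexOf("+");
  return plusIndex === -1 ? null : localPart.slice(plusIndex + 1);
}
```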

All in all, it has been a fun two weeks working on this challenge. Many thanks to the Dev.to and Postmark teams for organizing it!


