Good December, all PALMers and readers beyond that category! What a wonderful few weeks it has been. As you are aware, last week's recap was lost to the bit space in an accident involving Medium drafts. But no worries, for this week's Medium will be a grand one, covering two weeks of content!
In this Medium, we’ll go over the following topics:
- What have the devs been up to?
- WhatsApp Release & features
- Conversational contexts
- Google’s Gemini in our project
- Business Model put plainly
- Surprise utility and partnerships
- Revenue Share
PaLM AI — The course of development
So, the elephant in the room. Where have we been? At work — that’s where! And at what — well, a whole ton of stuff. Kind of vague? Let’s explore further.
Team 1 (Multiplatform Bots, Hardware Module & PaLM OS)
The first section of our dev team is at the heart of this Medium. How so? Because 80% of what we've done in the past week cannot yet be revealed here.
That won't be an issue, however, since Team 1 has still been completing a LOT.
The WhatsApp Release
It goes without saying this was an endeavour. Meta stopped supporting its SDKs for the WhatsApp API, and Facebook's API still has not fully integrated WhatsApp, so the only way to develop against the WhatsApp API layer is with plain HTTPS requests. We achieved this with Node.js's built-in https module. As the original Telegram bot is written in Python with a high-level SDK, the WhatsApp bot was essentially a start from scratch.
As one of the first movers in this space to hop on WhatsApp, we all felt proud of Team 1 for making this happen. Our bot received an "excellent" rating from Meta's reviewers after the exhausting process of sending IDs, company details, et cetera. It was all worth it in the end: soon PaLM Assistant can be used by billions.
Conversational contexts
What's more frustrating than being stuck in a conversation with a person who has no memory? Well, probably being stuck with a forgetful AI! That was PaLM on WhatsApp for a few days, and indeed that is still PaLM on Google's official Discord.
What makes PaLM act like that? Certainly not the model itself. The model simply maps input to output and does not keep a native cache per request ID; the conversational context cache has to be built by hand. On Telegram, you can use the ctx (context) object. Easy. On WhatsApp? No such thing. The context had to be built manually and persisted to a database so it survives server restarts. Thankfully, our team can tackle such challenges.
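A per-user context store like the one described above can be sketched as follows. This is an illustrative sketch under our own assumptions: the persistence layer is stubbed with an in-memory Map, where production code would write to a real database, and the function names and turn limit are invented for the example.

```javascript
// Sketch of a manual conversational context cache, keyed by user.
// The Map stands in for a database table; the cap keeps prompts from
// growing without bound.
const MAX_TURNS = 20; // assumed limit so prompts stay a manageable size

const store = new Map(); // stand-in for persistent storage

// Record one turn of the conversation and trim the oldest overflow.
function appendTurn(userId, role, text) {
  const history = store.get(userId) || [];
  history.push({ role, text });
  while (history.length > MAX_TURNS) history.shift(); // drop oldest turns
  store.set(userId, history); // in production: UPSERT into the database
  return history;
}

// Fetch the history to prepend to the next model prompt.
function getContext(userId) {
  return store.get(userId) || [];
}
```

In use, each incoming WhatsApp message appends a "user" turn, the model's reply appends an "assistant" turn, and getContext supplies the accumulated history for the next request, which is exactly the memory the bare model lacks.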
Hardware Module & PaLM OS
A lot of doubt has been cast on whether the external hardware module we are building will ever be delivered. The developer team is therefore happy to say that the prototype is finished, and that PCB plans and component lists have been sent to prospective manufacturers. We will initially manufacture a sample batch of prototypes ourselves and send them to people completely unrelated to the team for impartial review and first impressions.
I have been working on this in my spare time to get it delivered before Christmas. This was initially not in our plans; it actually spawned from a community joke (keep making them), and ever since we found it plausible, we have been committed to making it part of our project and allocating resources to it.
PaLM Home System is a working title, as the final name is still in the works. The premise, however, is a microcomputer-powered device with cloud access to an AI, all the peripherals required for physical and remote voice control, and touchscreen compatibility. It will carry features from every version of PaLM AI: you can combine the crypto-oriented features of Telegram with the user-friendly, command-free logic of WhatsApp, all the way to the live image generation of the Discord version.
The custom OS is built with Yocto and configured as a plug-and-play system that can be customized for specific needs. Our device design is extremely power-conservative and lightweight. We will also attempt some form of compatibility with the Google Home system, but this will ultimately have to go through the cloud, and functions may be limited. Every device will have its own wallet address for on-chain functionality.
There is no way to currently pre-order, but we will tease you with further updates from this front of development in the coming days.
Google’s Gemini & PaLM AI
We thought it would be great to share information about this release, as it is relevant to the project & we want to put good and correct information out there.
The developer team is thrilled about Google's announcement of Gemini's release this month. It will certainly be integrated into PaLM AI, as PaLM AI is committed to running on Google's suite of AI tools and products, alongside other Google services such as Search, YouTube and Maps, which are already successfully integrated into our products.
However, a question seems to be on people's lips, which goes along the lines of "is PaLM not outdated now that Gemini is being released?"
The answer is no. Gemini is a general-purpose multimodal model: its LLM-based components are probably built on PaLM, while vision and other modalities run on whatever DeepMind has developed. It is essentially a supermodel, which is why it is being integrated directly into phones and other devices. Basically, it is Google's competitor to GPT-4V, which is a multimodal model as well.
PaLM and Gemini are a duo. The PaLM model series will probably see a ton of iterations, and the PaLM AI project will utilize every one of them. Meanwhile, we cannot wait to use everything Gemini provides. Multimodal input is a necessity for the future of AI, and a multimodal model this easy to integrate into products is without a doubt revolutionary.
Ever since launch, PaLM AI's mission has been to be the definitive AI chatbot on all platforms, accepting all sorts of input multimodally, as we have already demonstrated by processing audio, text and imagery. Part of our motivation and excitement for the future of this project is that we love what Google does and has done, and we believe in their resources as much as we do in our own.
We also caution investors to watch out for quick "wrappers", as they are called, that have zero innovation behind them, and to make sure that what they are investing in actually makes sense. PaLM and Gemini are tools, and it is up to us developers to do our magic around them and utilize them for the benefit of everyone. Gemini is less of a tool and more of a finished product that is going to absolutely take over, which is exactly what you would expect from Google. Meanwhile, there is still a whole lot of the ever-expanding universe left for PaLM to conquer.
We will use Gemini in PaLM Assistant on the 13th of December.
Business Model of PaLM AI
The most commonly asked question is: what is the utility of the token? The answer is simple: mostly revshare. We consider every holder part of our business, collectively owning half of it and reaping 50% of the rewards. Our business model revolves around providing whitelabel bot solutions (custom bots running on modified PaLM), advertising through affiliate links, organic targeted marketing with our AI, and other sources as well. We want to run the project like a business, with the main goal of generating profit for our community through the innovative solutions we provide.
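Put as arithmetic, the split above works out like this. Note the hedge: the 50% figure comes from this post, but the pro-rata-by-holdings formula and the function name are our own simplification for illustration, not the final revshare mechanism.

```javascript
// Illustrative arithmetic for a 50% revenue share, distributed
// pro-rata by token holdings. This is an assumed distribution rule
// for the sake of the example.
function holderPayout(revenue, holderBalance, totalSupply) {
  const pool = revenue * 0.5; // half of revenue goes to the holder pool
  return pool * (holderBalance / totalSupply); // holder's pro-rata slice
}

// Example: 10,000 in revenue, a holder with half the supply.
// holderPayout(10000, 50_000_000, 100_000_000) → 2500
```

Under this simplification, a holder's reward scales linearly with both protocol revenue and their share of supply, which is the intuition behind "owning that half collectively."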
The AMA on the 8th of December will be extremely defining. We have been asked about revshare a lot, but we first wanted to build the best and most gas-efficient solution we could, and to see whether people were interested in supporting us while we did exactly that. That support will now be rewarded.
As said, Team 2, which is developing our surprise utilities, staking and revshare, has been absolutely setting the Ethereum blockchain on fire with its development. As the upcoming dApps are tied to the partnerships, and we want all sides to give the full green light so the launch is joint and smooth, we have decided not to leak them just yet. However, the upcoming week will reveal all of these surprises, and it will be a thing to remember.
The end
Okay, for the sake of my fingers, let's wrap up! And what better way to wrap up than with the classic: massive thanks for being with us, sticking with us and hanging out with us, and as always, stay tuned for all we have to offer.