ABOUT

Unravel is my project to reengineer the internet (DNS and up). It will replace messaging, chat, social networking, search, media and file sharing, and a whole lot more. It will be open source and allow anyone to build anything they want on top of it. It will be built to be secure, and to provide privacy and verifiability.

At its core Unravel will be a mesh-distributed database with an API to access the data. It makes heavy use of checksums and ECC cryptography for encryption and verification. It is written in C for maximum performance, and is built to run on anything from an embedded device to a phone, a PC or a supercomputer.

I don't like what the internet has become. In particular, I don't like the cloud. Today most communication online happens through walled-garden intermediaries who store, inspect and triage everything. There shouldn't need to be any intermediaries to do any of the things we want to do, but right now we have to use them. I think that who controls information matters. I think that privacy matters. I think the user should be in charge of what they see, who they communicate with, the software they run, and what information they store and share.

Maybe the rest of the world doesn't care about any of this. Maybe everyone else is happy with the internet we have. I'm fine with that; I'm just not fine with there not being any other options. That's what I'm doing: I'm building another option, because I can, and because someone should.













SOL 76

Finally back after a long time off doing other stuff. My first day back was just trying to figure out which files contain all my dead previous attempts and which contain the final working version that I am happy with. With the progress I have made on the data storage, I need to find a new problem to attack. While Unravel is meant to be a networking protocol, I want to hold off on networking and focus on local storage for now. The first issue is how an application talks to the local storage. The first and obvious way of doing this is to have Unravel live as a background task and have each service it uses be a dynamically loaded library. I will do this, but for a lot of applications this won't work. Many applications will want to be their own separate processes and talk directly to the Unravel process. This means some kind of shared memory architecture (something I have never done before). One way to do this would be to use localhost networking, but I'm afraid this won't be as fast as I want it to be. Another solution would be to have more than one process access the file storage at once, but that feels scary and could be slow if multiple apps want to load the same files at the same time. I want to put as little trust in the speed of the file system as possible.
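
To get a feel for the shared memory option, here is a minimal POSIX shm_open/mmap sketch. The segment name and layout are made up for illustration; a real daemon/client setup would also need synchronization and a proper protocol on top of the shared region, none of which is shown here.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/unravel_ipc";  /* hypothetical segment name */
    size_t size = 1 << 20;              /* 1 MB shared region */
    int fd;
    char *mem;

    fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if(fd < 0)
        return 1;
    if(ftruncate(fd, size) != 0)
        return 1;
    mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if(mem == MAP_FAILED)
        return 1;
    /* any other process that maps the same name sees this write */
    strcpy(mem, "hello from the unravel daemon");
    munmap(mem, size);
    close(fd);
    shm_unlink(name);                   /* remove the segment when done */
    return 0;
}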

On the subject of file storage, Intel has sent me an Optane SSD card (and a full computer to put it in, since I didn't have a free PCI-Express slot) and I'm going to use that as a test platform to figure out how to use that sort of low-latency, high-bandwidth storage that looks more like memory than storage.

Another interesting problem is indexing for search and multi-user support. I want a system where each record is encrypted using a key that the user provides. My goal is to never store a list of users anywhere. Having no list of users, combined with lots of nested users and some stored garbage data, will make it very hard for a forensic investigator to find out whether all user data has been found. Let's say user X has four accounts, three of which hold benign data while the last one contains a secret. If the user gives up the passwords for the first three accounts, it should be impossible to prove that a fourth account exists. It should also be possible to protect data from malicious or semi-malicious software. Let's say you have a module that looks at all your text and translates it into a second language. You want this tool to be able to read most of what you write, but since it may share this information with other users to build a better translation database, you may not want it to be able to read everything you write. If it could read everything it would essentially become a backdoor to your data. I'm contemplating building a DLL interface first just to have something I can work on.




SOL 75

I want to store data that is as easy as possible to parse and understand. Ideally, if two people tried to save the same data at two ends of the world, they would store it in a compatible way. Obviously I can't sit here in my ivory tower and define how all data that will ever exist should be stored, but hopefully I can help a little bit. I have two schemes for accomplishing this: extended data types and templates. I'll leave templates for a future post and focus on my extended data types.

Unravel at the time of writing supports a number of base data types:

K_VT_INT8
K_VT_UINT8
K_VT_INT16
K_VT_UINT16
K_VT_INT32
K_VT_UINT32
K_VT_INT64
K_VT_UINT64
K_VT_REAL32
K_VT_REAL64
K_VT_REAL
K_VT_TIMESTAMP
K_VT_USER
K_VT_ID
K_VT_STRING

Most of these would be recognized by any C programmer. I have added a timestamp consisting of 64-bit signed seconds since the year 2000 (it's slightly annoying that we can't start at year zero, since no one seems to know when that was) and 64 bits of fractions of that second. Given that it covers well before the Big Bang, it should hopefully outlast the use of Unravel :-) . I also have a universal pointer and a local pointer within a document, 256 and 8 bytes in size respectively.
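
For illustration, the timestamp described above could be laid out something like this. The struct and field names are my own guesses, not the actual K_VT_TIMESTAMP definition.

#include <stdint.h>

/* Hypothetical layout of the timestamp type: 64-bit signed seconds
 * relative to the year 2000, plus a 64-bit binary fraction of a second. */
typedef struct {
    int64_t  seconds;   /* seconds since 2000-01-01, may be negative */
    uint64_t fraction;  /* fraction of a second, in units of 2^-64 s */
} KTimestamp;

/* Convert a Unix time (seconds since 1970) to this representation.
 * 946684800 is the Unix timestamp of 2000-01-01 00:00:00 UTC. */
static KTimestamp k_timestamp_from_unix(int64_t unix_seconds)
{
    KTimestamp t;
    t.seconds = unix_seconds - 946684800;
    t.fraction = 0;
    return t;
}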

I may want to add more data types in the future, but I also want to limit them in order to keep the implementation burden as light as possible. My implementation will do a bunch of clever conversions to make it easier too. However, these are all very low level: they say something about how a value is stored but nothing about what it means. Unravel supports being extended to up to 256 base data types (god I hope we never get anywhere close to that), but I also have support for extended data types that give you 32 bits of type space. That gives us 4,294,967,296 data types. Each of them stores its data as one of the 256 base types, but has its own meaning. That gives us about 16.8 million possible subtypes of each base type. I can give each and every one of these its own meaning. Some will be defined one by one and some in larger blocks. Individual subtypes of the string data type may be things like:

-FirstName
-LastName
-Tags
-Headline
-Address
-URL
-Message

All of these are text but have different meanings and may have different interfaces. In some cases I will assign large blocks. The SI unit system is composed of seven base units, each with an exponent ranging from -4 to +4. That means the entire space of possible combinations takes up nine to the power of seven, or 4,782,969 combinations. I will probably mirror all these types between 32- and 64-bit floating point numbers. I also plan to encode the three-letter codes for all possible currencies as extended types of the real type.
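
Here is a rough sketch of how such an extended type id could map onto a base type and an SI exponent combination. Everything here, from the macros to the packing scheme, is an assumption for illustration, not the actual Unravel encoding.

#include <stdint.h>

/* Assumed split of a 32-bit extended type id: the high 8 bits select one
 * of the 256 base storage types, the low 24 bits select a subtype. */
#define EXT_TYPE_BASE(ext)    ((uint8_t)((ext) >> 24))
#define EXT_TYPE_SUBTYPE(ext) ((ext) & 0x00ffffffu)

/* Hypothetical packing of an SI unit subtype: seven base units, each with
 * an exponent in -4..+4, stored as a base-9 number (9^7 = 4782969 codes). */
static uint32_t si_subtype(const int exponents[7])
{
    uint32_t code = 0;
    int i;

    for(i = 0; i < 7; i++)
        code = code * 9 + (uint32_t)(exponents[i] + 4); /* shift -4..+4 into 0..8 */
    return code;
}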

It's a bit scary allocating blocks of type numbers. 256 base types is a lot, but giving away almost 5 million float subtypes out of just 16.8 million in one go feels like I could run out of extended types quickly. I could obviously allocate the subtypes dynamically so that each base type doesn't get the same number of subtypes, but this would make it slow to convert from extended type to base type and could make backward compatibility tricky. If I add base units, implementations will need to adapt (something I would like to avoid), but if I add new subtypes (something I assume I will do all the time) in predefined ranges, then implementations won't need to adapt.




SOL 74

Big progress. I'm finally packing and unpacking a data structure and storing it. I can commit and discard changes. The API to do so is embryonic at the moment, but given that this is my 4th implementation and that I'm fairly confident this one will stick, that's a good step.

The biggest design decision, one I'm increasingly confident in, is to pack and unpack data as I go. Normally, if you have a data structure on disk, you load it into allocated memory and put it into structures. (One can store the data in the same layout on disk as in memory to avoid packing and unpacking, but endianness and padding make that a no-go for my application.) The problem I have is that loading a structure into RAM requires the data to be smaller than the available RAM. Since Unravel is designed to handle obscenely large datasets, that doesn't work, so we need to be able to read data incrementally. For small data one could probably make a faster implementation, and I may do this (since most data is unlikely to be big), but then I will have to deal with the edge case of small data growing into large data and the complexities of that, so that will have to wait. I also want to start profiling before I decide to optimize. The cache coherency and data locality of packed data could in theory make it competitive with unpacked data.
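
As a toy illustration of packing and unpacking as you go, here is a cursor that pulls fixed-size values out of a packed buffer one at a time instead of mapping a struct over the whole thing. The byte order and names are my own choices; the real code has to deal with segment boundaries and far more types.

#include <stdint.h>
#include <stddef.h>

typedef struct {
    const uint8_t *data;  /* packed bytes, e.g. one loaded segment */
    size_t         size;
    size_t         pos;   /* read cursor */
} PackCursor;

/* Read a little-endian 32-bit unsigned value at the cursor, independent of
 * host endianness and alignment. Returns 0 at the end of the buffer. */
static int pack_read_u32(PackCursor *c, uint32_t *out)
{
    if(c->pos + 4 > c->size)
        return 0;
    *out = (uint32_t)c->data[c->pos] |
           (uint32_t)c->data[c->pos + 1] << 8 |
           (uint32_t)c->data[c->pos + 2] << 16 |
           (uint32_t)c->data[c->pos + 3] << 24;
    c->pos += 4;
    return 1;
}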

The incremental read creates some real complexities in the code and the API, and may also affect performance. I'll probably talk more about performance later, but a lot of it comes down to usage patterns. Since I don't know how Unravel will be used (or abused), I will have to do a bit of guesswork, and a bit of light encouragement. I'm trying to make an API that is both easy to use and that gently guides you to do things the fast way.




SOL 70

I use a lot of 64-bit numbers in my data structures for IDs, pointers and so on. The numbers need to be 64-bit in order to handle very large data sets, but at the same time it will be very rare that that amount of precision is needed, so for 99+% of data structures they would be a big waste of memory. To deal with this I have implemented an expanding data type.

The data type begins with a single byte. The first 2 bits of this byte signify the size of the number. If they are 0, the rest of the byte is used, giving us a range of 0 - 63; if set to 1, we add one more byte, giving us a range of 0 - 16383; if set to 2, we add 3 more bytes to get a 30-bit value; and if set to 3, I add 8 bytes to get the full 64-bit range.

Right now I'm debating whether I can save one byte for option 3 and only add 7 bytes, using the 6 bits left over from the first one. This would limit the range to "just" 62 bits. I have a feeling this limit may not actually be a limit in a lot of cases, since the number often stores a count of things rather than their size.
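
A sketch of an encoder for the scheme described above (the variant with a full 8 extra bytes for the largest case). The function name and the byte order within the tail are my own choices for illustration.

#include <stdint.h>
#include <stddef.h>

/* Encode value into buf using the expanding scheme: the top 2 bits of the
 * first byte select the total size (1, 2, 4 or 9 bytes). Returns the number
 * of bytes written. buf must have room for at least 9 bytes. */
static size_t expand_encode(uint64_t value, uint8_t *buf)
{
    size_t i;

    if(value < (1u << 6))          /* 6 bits in the first byte */
    {
        buf[0] = (uint8_t)value;
        return 1;
    }
    if(value < (1u << 14))         /* 14 bits: first byte + 1 extra byte */
    {
        buf[0] = (uint8_t)(0x40 | (value >> 8));
        buf[1] = (uint8_t)value;
        return 2;
    }
    if(value < (1u << 30))         /* 30 bits: first byte + 3 extra bytes */
    {
        buf[0] = (uint8_t)(0x80 | (value >> 24));
        buf[1] = (uint8_t)(value >> 16);
        buf[2] = (uint8_t)(value >> 8);
        buf[3] = (uint8_t)value;
        return 4;
    }
    buf[0] = 0xc0;                 /* full 64 bits in 8 extra bytes */
    for(i = 0; i < 8; i++)
        buf[1 + i] = (uint8_t)(value >> (8 * (7 - i)));
    return 9;
}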




SOL 64

I've made some major progress. I have now connected all 5 layers of the data storage system. A lot of work remains, but I'm starting to see the end of the tunnel when it comes to the data storage system. I need to run an awful number of tests to make sure it is working properly in all cases, but at least it works in some cases, and that is a major milestone.

The complexity comes from a few basic design decisions made early on. The first is that I want the database to be able to handle big as well as small data very efficiently. By big I simply mean data so big that it can't fit in RAM. This means that the data needs to be segmented. Once you start segmenting data you can make some pretty nice optimizations: for instance, if you update only a small piece of a large data set, you only need to load, modify and save that bit. An operation like an insert can be made significantly faster in a segmented array than in a contiguous one. However, since most data will be small enough not to need segmenting, you don't want the segmentation to take up resources when it's not needed.

The second feature I want is for the data to be regressive, meaning that you can store multiple versions of the same data set. If you combine this with a flexible segmentation system, you can make the storage very efficient, as only changes need to be saved. This will also come into play later as I develop the networking, where you may want to broadcast changes to a document rather than the whole document.

Next, I want atomicity. This means that the user needs to be able to make multiple changes to the data set and then apply or discard them, which in turn means there needs to be a temporary storage facility.

This all turned into 5 layers:
-Block Storage.
-Block segmentation.
-Temporary modification storage.
-Layer to merge blocks into arrays large enough to be parsable.
-The data structure defining the content of the record.

There will be a 6th layer at some point that makes all this accessible from libraries outside of the database executable, but compared to all this, that is trivial.

Now that I have something working, I can start organizing, optimizing and testing, with some confidence that I'm on the right track and that all basic requirements are met.




SOL 62

Thinking about programming is underestimated. I have been thinking about programming, and I have decided that I'm going to redesign my records... again. I think this is the 4th design.

Here is my thinking. I started out thinking that I wanted some kind of key-value store. Pretty soon I realized I want structs and arrays of structs. Then you realize that you want to import data coming from formats like XML and JSON. The problem with these is that they don't really store evenly sized arrays, which means that access times for large data sets are terrible. XML and JSON also make some assumptions about how data should be structured; for instance, they aren't too good at expressing a graph. I want a data structure that can be very efficient even when it's large, and that can express all manner of structures. One solution is to make a very simple data structure and then, since the data structure can express links, just link together multiple structures. In this case I could abstract away the links. However, I'm going to assume that looking up a link can be fairly resource intensive. A different record is much less likely to be local, and may require a slow network lookup to be found. If this is abstracted away, the performance of a traversal of a data structure would be wildly unpredictable and failure prone. It makes sense to have each record be self-contained. Then you know you have all of it, and you won't accidentally traverse the entire internet.

So what then? C to the rescue! I'm going to create a data structure that is very C-like. A record will consist of 3 sections: typedefs, allocations and data. The typedefs describe named types and structures. The second section will describe a number of allocations that correspond to the typedefs. Each allocation will have a type corresponding to a typedef, its own id that acts as an address, and a size saying how many array elements the allocation consists of. Finally, the data itself is stored.
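
Here is a rough in-memory sketch of what such a three-section record could look like. All names and field choices are mine, made up to illustrate the description above rather than the actual Unravel record format.

#include <stdint.h>

/* Section 1: a named type definition (a "typedef"). For a struct type,
 * the members would reference other typedefs; details omitted here. */
typedef struct {
    char     name[32];      /* e.g. "Person" */
    uint32_t member_count;  /* number of members if this is a struct */
} RecordTypedef;

/* Section 2: an allocation, i.e. an array of some typedef'd type. */
typedef struct {
    uint32_t type;          /* index into the typedef section */
    uint64_t id;            /* the allocation's address within the record */
    uint64_t element_count; /* how many array elements it holds */
    uint64_t data_offset;   /* where its bytes start in the data section */
} RecordAllocation;

/* Section 3 is just the packed bytes of all allocations, in order. */
typedef struct {
    RecordTypedef    *typedefs;
    uint32_t          typedef_count;
    RecordAllocation *allocations;
    uint32_t          allocation_count;
    uint8_t          *data;
    uint64_t          data_size;
} Record;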

The advantage of a C-style structure is that it combines flexibility with being transparent about its performance properties. Due to the underlying storage system, some things, such as inserts, will be easier than in normal C structures. I like the idea that things that are easy in the API are fast and things that are more complicated are slower. It will nudge users to do the right thing.

I'm sure you can all look forward to the 5th revision of my record design, but until then I'm really happy with this one.




SOL 60

I think I have now come to the point where I (roughly) have an idea of all the bits I need to write. The problem I have now is that everything is so complex that it is almost impossible to write it all at once. I have to figure out how to make something simplified and then add features. I need to prune my feature set down to something that I can test and then incrementally add features one by one. I'm trying to write a list. Stay tuned.




SOL 57

I've been thinking about it a lot, and I think that my solution is just to let access be slow. Here are my reasons. The main problem will be the seek time, but the way the API is set up you set a read point and then read data. If you read data sequentially you will get just about the same speed as before. If you read it randomly it will get slower, but if the arrays aren't too big it will be fine. I also expect that the dynamic structure will be used for things like text, and that is something you are unlikely to have too many entries of. If you for instance have a list of names, it's more likely to be under a hundred than over a billion. Things like scientific data are much more likely to be a real array where there is no search time.

For a while I was considering storing some sort of index with the data, but I think that is a bad idea. First of all, I don't want it to take up any space, and especially not network traffic. I'm conscious that what I am implementing should at some point in the future be possible to standardize and therefore be implemented again. If some future implementation, or a future version of my implementation, changes the way data is indexed, that won't be possible, because the indexing is defined by the data format. Another reason not to have any indexing in the data is that it makes changing the data much more complex. Since I want changes to be networked, a small change in the data could generate a lot of traffic describing how the index needs to accommodate the change.

So no index? Well, that's not good enough either. Saying "no one will ever have data big enough that linear search isn't fast enough" feels eerily like a "no one will ever need more than 640k of memory" decision. So what I am planning on is an optional index that can be stored alongside each data structure. This means I could implement more than one type of index in the future and be much more flexible in how I implement it. One great thing about an index not included in the data is that I can offload the indexing to a separate thread. This would create almost "just in time indexing". As I assume hardware in the future will have lots of threads with little to do, this might be a very good thing. I'm going to keep thinking about this.




SOL 56

I'm going in circles. I've been working on implementing the record system, and I have implemented all of it, but it doesn't work the way I want. Every time I start implementing a module, I do so only to realize I have to redesign the previous module. I think it comes down to this. Imagine you have the following record:

struct{
    int int_array[4];
    struct{
        float a;
        char b[7];
    }struct_array[5];
}MyRecord;

It should be very fast to access, as it's easy to compute the offsets to where all members reside in memory. Now, if a user wants to change int_array to have a length of six instead of four, we will need to insert a bit of memory. This I have implemented. I wouldn't say it's straightforward, but it's doable, and once it's done the structure runs fast again. But what if we want to change the array size of "b"? Then we need to insert/remove a bit of memory in 5 different places. That's OK as long as it is 5, but what if it's 5 million? Well, it may be slow, but if it needs to be done it needs to be done. Now imagine b is a bit of text we use to store names. If someone's name is Schwarzenegger, it's not going to fit, so we need to make b large enough to fit it. The problem is that if we insert enough space to store Schwarzenegger, we do it in 5 different places, and that is slow; plus, if everyone else is named Bob, Mia and Bo, we are wasting a lot of space.

It would be much better if the 5 different b's could each have their own size. If we do this we can't use normal array lookup to figure out the offset in memory. We could parse the memory to find the place, but that would be slow if struct_array happens to have 5 million members. We need some sort of lookup table. A lookup table will never be as fast as an array lookup, so we probably want to distinguish between arrays with fixed-size members and dynamic-size members.
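
One common way to build such a lookup table is a plain offset array: element i starts at offsets[i], and growing an element means shifting every offset after it. This is just a sketch of that idea under my own naming, not what Unravel actually does.

#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint8_t  *data;     /* packed, variable-size elements back to back */
    uint64_t *offsets;  /* offsets[i] = start of element i, offsets[count] = end */
    size_t    count;
} VarArray;

/* O(1) lookup of a variable-size element. */
static uint8_t *var_array_get(VarArray *a, size_t i, uint64_t *size_out)
{
    *size_out = a->offsets[i + 1] - a->offsets[i];
    return a->data + a->offsets[i];
}

/* Growing element i by 'grow' bytes still means touching every later offset,
 * which is why updates stay more expensive than for fixed-size arrays. */
static void var_array_grow_offsets(VarArray *a, size_t i, uint64_t grow)
{
    size_t j;

    for(j = i + 1; j <= a->count; j++)
        a->offsets[j] += grow;
}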

I need to think about this some more. In the past my thought was that if you need something with a dynamic size, you could just add a pointer and point to another structure. But pointers are big (256 bytes) and somewhat cumbersome and slow to follow. If I limit what you can do with one structure, I may end up forcing my users to use way more pointers than I would like, and then everything will be slow again. Given that I have code that can do inserts very effectively, it does make sense to let users use it for things like text.

I'm going to end up adding one little flag that is going to force me to rip up everything under the hood. It's painful, but it's going to be worth it. Now I just need to figure out how to do a lookup table that is both fast to access and fast to update....




SOL 52

My web site is finally on its own domain (nice one, right?). A big part of the project is about replacing DNS, ergo I'm working hard on rendering my investment in a nice domain worthless. I've even added an "about" button in case someone wants a brief explanation of what the hell I'm doing here.

I've been busy with some other stuff so things have been slow, but I am now properly moving forward with my translator API. The main issue slowing work down is that internally Unravel will have so many different intermediate data representations. Part of me thinks it would have been better to limit myself to a single representation and then add more in the future, but I also know that if I didn't solve all representations at once I would likely miss some constraint and end up having to rewrite everything.




SOL 46

Everything is now much better. I have built a unified memory merger that should do the trick for all the things I need. I have also created an incremental memory reader that lets a user read a variable amount of memory while only locking an object once.

Right now I'm working on how each record stores data. It's complicated because data can be very small (one byte) or very big (terabytes). The right solution is to have several ways of storing the data. One trick I'm using is to have a pointer pointing into the data, but if the data is as small as or smaller than the pointer, I can have a union with the value and use the memory that would hold the pointer to store the value itself. This saves me space, a level of indirection and an allocation. It may sound like a small optimization, but my feeling is that the majority of values will be small enough to fit in a pointer, so optimizing for it makes sense.
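
The trick looks roughly like this small-value-optimization sketch; the types and names are mine, for illustration only.

#include <stdint.h>

/* A value either lives inline (if it fits in a pointer's worth of bytes)
 * or behind a pointer to a separate allocation. */
typedef struct {
    uint32_t size;            /* payload size in bytes */
    union {
        void   *pointer;      /* used when size > sizeof(void *) */
        uint8_t inline_bytes[sizeof(void *)]; /* used otherwise */
    } u;
} Value;

static const void *value_get(const Value *v)
{
    if(v->size <= sizeof(void *))
        return v->u.inline_bytes;  /* no indirection, no allocation */
    return v->u.pointer;
}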

When dealing with large sets of data that require allocations, it makes sense to first specify the format and layout of the memory and then fill in the data. This is, however, often not very convenient. Often you already have data and want to make a change to it. Let me give you an example of an issue. Imagine this data structure:

typedef struct{
    char name[16];
    struct{
        double distance;
        uint count;
        float value;
    }array[1000000];
}MyRecord;

To modify this we then call:

record_member_remove(record, "distance");
record_member_remove(record, "count");

Given the size of "array", removing members from the struct is very costly, since you will need to rearrange the memory. This means that when the user asks to remove the member "distance" we don't want to do all that work right away, especially since the next call removes "count" and the data would have to be rearranged again. In fact, we don't need to rearrange the data at all until we commit it to storage. At that point we most likely have to copy the data anyway, so the rearrangement becomes almost "free". This however doesn't always work. Consider these three examples:

/* example A */
record_member_add(record, "new_thing", "array", FLOAT);
record_commit(record);

/* example B */
record_member_add(record, "new_thing", "array", FLOAT);
record_member_set(record, "new_thing", 3.14);
record_commit(record);

/* example C */
record_member_remove(record, "distance");
record_member_add(record, "new_thing", "array", FLOAT);
record_member_set(record, "new_thing", 3.14);
record_commit(record);

In A we add a new member to array, but that doesn't mean we have to rearrange the memory until committing. In example B, we create a new member and then store data in it. This means that there needs to be space available for it, so record_member_set will trigger a rearrange. In example C we first remove a member, then add a new one and set its value. Given that the old member took up more space than the new one, we are able to use that space to store the value given in record_member_set, and we can avoid a rearrange until commit. All this obviously makes everything very complex... But at least I'm making progress. I wonder if other databases do these kinds of things... I've heard horror stories, but I don't really know how much of it is true. I have a feeling they don't pack the data as densely as I do.




SOL 40

I have tried for about a week to design and implement my records, and I have come to the conclusion that it can't be done right with the code I have, so I'm about to throw away almost all my work and start over. It sucks.

Here is the problem: I want the records to work like this; you have an existing record, you retrieve it from storage, you make a bunch of changes to the record, and then you apply the record to your storage. Easy, right? No, not at all if the record is big. Let me give you an example. Let's assume the record is 10 gigs large and the change you want to make is one byte. First of all, just loading the entire record into RAM is not feasible, or at least not effective. Then writing all 10 gigs just for the change of one byte is clearly not effective either. Now let's assume that the user gets the record, makes a change and then cancels the operation by not saving the change; then a lot of work has been done for no reason. OK, so the solution would be to just store the changes and then apply them to the record when the user actually applies them, right? Yes, but that is not enough. What if the user adds 10 gigs to a record? Storing all those changes in RAM is not good enough. You will need to store them in the storage system, which can put things on disk when need be. OK, so THEN we have a good enough system, right? No. Once you have added the 10 gigs of data to the file storage and you want to use that data to create the new version of the record, you don't want to load it from disk into RAM just to put it back on disk. You want to use the fact that it is already on disk and make the new record simply point to the data that has already been committed to disk.

This breaks my storage system, so I have to rewrite it. It sucks, but that's the way it is. I'm working on specifying what the new system will look like.




SOL 37

I'm working on the data format, and 2 major issues have been occupying my mind. The first deals with communicating changes to the internal data format. If you have a record you can request the current version of the record. For most applications this is all you really need, but in some cases you want to subscribe to a record and get notifications if it changes. Now the question becomes: how do you describe changes? In the data store, changes are described as inserts. Each insert contains a start point and end point, the size of the insert, and an array of bytes to be inserted. So far so good. With this information you can go from one document version to another. Easy, right? But you get into trouble when you get to arrays. Consider this array:

data[a]

The data itself is contiguous, so if we want to add 5 more values we can do so in one command. The problem is that we must also change the 'a' value that tells us how long the array is, otherwise the record is in an invalid state. So this tells us we need compound changes, where we do multiple inserts in one go. That's all doable; a bit more complex, but doable.
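
For illustration, a change description along the lines of the inserts above might look like this; the struct names and the exact fields are my guesses, not a defined wire format.

#include <stdint.h>

/* One insert: replace the bytes in [start, end) with 'size' new bytes
 * (end == start means a pure insert with nothing overwritten). */
typedef struct {
    uint64_t start;   /* first byte affected */
    uint64_t end;     /* one past the last byte replaced */
    uint64_t size;    /* number of bytes in 'bytes' */
    uint8_t *bytes;   /* the data to insert */
} ChangeInsert;

/* A compound change: several inserts that must be applied together so the
 * record never becomes invalid, e.g. growing data[a] and updating 'a'. */
typedef struct {
    uint32_t      insert_count;
    ChangeInsert *inserts;
} CompoundChange;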

Then we get to more complex data structures like arrays of arrays:

data[b][a]

Changing 'a' now is much harder, since we must also change 'b' arrays to keep the data consistent. Now the compound change needs to be a fair bit more complex. All of this can be solved by just adding more inserts, and I guess that's fine, but it irks me a bit, especially since inserts might no longer be the optimal way of describing the change.

Another big issue is that an owner of a record can accept suggested changes from other users. For this to work there needs to be some mechanism whereby an application can receive the changes and then accept or deny them. The obvious way to do this would be to present the owner with two records, the old one and one with the suggested changes. Then the owner can use whatever criteria they want to establish whether the change is acceptable. It's simple, but if the record is massive, say terabytes of data, creating an entirely new copy of the data and then having an application search all that data only to find that a single byte has changed somewhere is hardly optimal. It would be much better if the system could describe what has changed, but then the description API becomes much more complex.

I can, with a lot of work, parse out what the changes are from the inserts, but I still have a feeling there might be a better way. I worry a lot about making the API very complex on one hand, and making it slow on the other.

The truth is I don't like my options, and that makes me think maybe there are better options out there.




SOL 35

Third day of thinking, and I now think I have a plan. Instead of just having records be locked by a user, preventing any other user from accessing them, I plan on having 4 different ways of locking a record.

Read-only lock.
A read-only lock is trivial to implement, since the underlying memory manager has built-in regression. When someone locks a version for reading, we can simply keep that regression around for as long as we need to. A read therefore can't in any way hinder other tasks from reading or writing to the record.

Ownership lock.
An ownership lock gives you complete control over a record, but depends on access to the record's private key. Only one thread can take ownership of the record at any one time. An ownership lock also lets the thread set the rights for the object.

Modify lock.
A modify lock gives you the same capabilities as the ownership lock but doesn't require the private key. The nice thing about a modify lock is that multiple threads can hold one at the same time. The system will then arbitrate the changes made under the different locks, which means that changes can be rejected. The modifications that can be made to the data structure can be controlled by options set by a thread with an ownership lock. The owner of the record should even be able to inspect and then reject or approve each modification using some sort of callback mechanism.

Clone lock.
A clone lock is a way to create an ownership lock by simply creating a new record as a clone of an old one. All data that is stored by checksum only is clone locked.

The nice thing is that once you have locked a record, the API for reading and writing is the same no matter how you locked it. The only real change is that some actions may be rejected. The arbitration system is going to be complex to implement, but I think the ability to have multiple users collaborate on data in real time is important enough to be worth the extra work. (Plus, since I have written a protocol that can do this in the past, I can't really defend building a new protocol with fewer features...) It's also worth mentioning that most of these rules only matter if you get multiple threads working on the same record, and that will probably be a rare thing.
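
To make the four lock kinds concrete, here is a hedged sketch of what such an API surface could look like. None of these names or signatures are the real Unravel API; they only mirror the description above.

#include <stdint.h>

typedef enum {
    LOCK_READ_ONLY, /* pin a version; never blocks other readers or writers */
    LOCK_OWNERSHIP, /* full control; requires the record's private key */
    LOCK_MODIFY,    /* concurrent; changes are arbitrated and may be rejected */
    LOCK_CLONE      /* take ownership of a fresh clone of the record */
} LockKind;

typedef struct UnravelRecord UnravelRecord;
typedef struct UnravelLock   UnravelLock;

/* private_key may be NULL for every kind except LOCK_OWNERSHIP. */
UnravelLock *record_lock(UnravelRecord *record, LockKind kind, const void *private_key);

/* Reading and writing look the same regardless of how the record was locked;
 * a write made under LOCK_MODIFY may later be rejected by the arbiter. */
int record_read(UnravelLock *lock, uint64_t offset, void *data, uint64_t size);
int record_write(UnravelLock *lock, uint64_t offset, const void *data, uint64_t size);
void record_unlock(UnravelLock *lock);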




SOL 34

I have spent the last two days without writing a single line of code. I'm focusing all of my time on figuring out how the API for reading and modifying records should work. I'm making progress, but I'm still not done.

All the work I have done on the regression system helps a fair bit. If one thread is reading data, it is possible to keep that version around and keep it consistent even if another thread makes changes to the data at the same time.

In a normal concurrent system you want a thread to be able to take ownership of a resource, make changes to it, and then release it. In Unravel this won't be enough. Ownership is not just an issue of multiple threads fighting over data; it's an issue of multiple computers on the Internet all being able to suggest changes, with an arbitrator who controls what changes are allowed. I have written this kind of thing before with Verse, but the challenge is to make it very transparent. Verse was written for low-level 3D graphics people; Unravel has a much wider audience.

Something I am working hard on is trying to avoid multiple API paths. I want the API to force you to do the right thing, but at the same time not overburden application development. If features are optional, they won't be supported. If there is a slow path that is easy to implement and a fast path that is harder, everything will be slow. The risk is that if you add a bunch of requirements, people may opt to write no code at all.




SOL 31

Progress! The memory segmentation is pretty much done. I do want to optimize the regressive data storage, and I have a feeling that at some point in the future I will want to optimize how memory is stored on disk. I'm investigating the optimal way to read and write data on SSDs.

I've started sketching out the new record API, but I will need to figure out how the data will be stored before I can start implementing stuff. I expect this part will be much easier than the memory segmentation work.

Today, I will just work on cleaning up the mess I have made testing my memory segmentation code...




SOL 28

A bit of a time warp here. I've been gone for a bit in New York and San Francisco, but now I'm back focusing on Unravel. In fact, I now have some time pressure. I have several possible upcoming projects where Unravel's data store could be used, and I would much rather use those projects as a test bed for Unravel than hack together something throwaway. I don't know when or if it will happen, but I want Unravel up and running sooner rather than later.

The first day of coding was basically just spent looking at the code, trying to understand how it works, what's done and what's missing. It's some really complicated stuff. This morning I got into testing and working on the regressive data store. I worked on it for a bit and realized that my lowest-level memory manager needs to support realloc. The main reason is not to avoid moving data (something realloc can do by remapping the memory layout in the MMU) but rather to be able to change the size of a data chunk while keeping its id. This means I don't have to update all the places that reference the memory, which will simplify the code and potentially be a big speedup. I'll get to work on that tomorrow.

Virtualizing memory has this interesting psychological effect that any access to a new memory block can potentially be very slow. It really makes you paranoid. The problem is that the performance depends entirely on the access pattern of the user, and that's something you can't really predict.




SOL 26

No change. I still think it's brilliant.




SOL 24

I have totally changed my mind. I really don't mind endianness. First of all, I will need to do a single memory copy no matter what. The reason is that I don't want an application using Unravel to have to care about locks, and therefore everyone gets their own copy of the memory they need. No app can hog memory and prevent other apps from accessing it.

To explain why I no longer mind endianness, I need to explain a bit about the kind of API I want to build. Any time you have some kind of data storage you want to access, you will need to write transformation code that wrangles the data into the structures you need. To fix this I had the idea that if the data was stored in memory the same way you would store it in a struct, you could just get a pointer to the memory, cast it to a struct and go. Since anyone can store data in Unravel any way they want, you would need some kind of test to make sure it's the data you are expecting. This will be needed for all kinds of things, so it's not a bad thing; the problem is that it makes everything very fragile. If the data isn't stored exactly how your code expects it to be laid out, your code will need to use some slow backup path. I really dislike this because it means you have to write both a fast and a slow path. Most people will just implement the slow path that merely works, and only if they really want to optimize will they pick the fast path. I know that's how people do it, but I hate it.

So if you need to copy and transform data every time you read or write it, why not let the user define what it should be transformed into? Instead of getting memory and hoping it's in the right layout, why can't the application just tell the API what format it wants the data in and have the transform do it for you? If a transform is always required anyway, a user-defined transform becomes free. The code could look like this:

typedef struct{
    char name[16];
    uint count;
    float value;
}MyStruct;

void read_data(UnravelRecord *record, uint count)
{
    MyStruct *memory;
    UnravelTransform *transform;
    uint i;

    memory = malloc(sizeof(MyStruct) * count);
    transform = unravel_transform_create_from_code(record, "char name[16];"
                                                "uint count;"
                                                "float value;");
    count = unravel_read(record, transform, memory, count);
    unravel_transform_destroy(record, transform);

    for(i = 0; i < count; i++)
    {
        printf("name = %s\n", memory[i].name);
        printf("count = %u\n", memory[i].count);
        printf("value = %f\n", memory[i].value);
    }
    free(memory);
}

I have never seen an API do this. I think it's brilliant. At least I do today. We will see about tomorrow.








SOL 23

Got nothing done. Today big/little endianness hit me like a ton of bricks. It ruins everything. Usually I pack all network traffic in all my protocols big endian, and that is fine, but here I'm running into a wall. Normally a protocol is something that just transfers data, meaning that it doesn't concern itself with how the data is accessed once it gets where it is going. Unravel is different. Unravel is designed to be usable as extendable RAM. Accessing local and network data is done with one API, and to truly make the network transparent, the local data access speed must be comparable to an application storing it in RAM. If accessing data is slower than building your own structures in RAM, users will have to start caching data in their own data structures, and that's when the networking becomes something you need to work on rather than a free benefit of the memory manager. This fairly extreme performance requirement makes everything very difficult. My first thought was to try to store all local memory in machine order, but that requires me to know the layout of the memory when sending it in order to be able to re-order it. Storing data in a local way also messes up how checksums are computed (they may not be computed right away, again for performance reasons). I also realized that you probably want to store data without padding, so some sort of local transform would be required. I have now come to the conclusion that I can probably build very fast lookup tables that can be used when copying data to solve this. I'm fairly sure it will be slightly faster on big endian than little endian machines. I will investigate further.
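
To make the lookup-table idea concrete, here is one hedged sketch of a copy routine driven by a per-field size table, byte-swapping as it goes on little-endian hosts. It only illustrates the approach; it is not Unravel code, and the table format is my own invention.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

static int is_little_endian(void)
{
    uint16_t probe = 1;
    return *(const uint8_t *)&probe == 1;
}

/* Copy packed big-endian fields into native order. 'field_sizes' is the
 * lookup table: one entry per field, each 1, 2, 4 or 8 bytes wide. */
static void copy_be_to_native(uint8_t *dst, const uint8_t *src,
                              const uint8_t *field_sizes, size_t field_count)
{
    size_t i, b, size;
    int swap = is_little_endian();

    for(i = 0; i < field_count; i++)
    {
        size = field_sizes[i];
        if(swap)
            for(b = 0; b < size; b++)   /* reverse the bytes of this field */
                dst[b] = src[size - 1 - b];
        else
            memcpy(dst, src, size);     /* already in native order */
        dst += size;
        src += size;
    }
}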








SOL 22

I have solved a lot of things. I now know how to manage my different data types and how to reference count them. A big issue was that I couldn't store reference counts together with the data being referenced, since that would mean that updating the reference count would force the underlying memory manager to swap the data in from disk. I decided to keep my reference counters in the index instead, but that meant they weren't as fine-grained as I maybe would have liked. I think I have a strategy for solving that.





SOL 19

I'm running into trouble. I want regression to be a core part of Unravel. If you make a distributed database you want others to be able to change data, but you don't want them to be able to destroy data, so you will need versioning. If you have a large data set and you make many small changes to it, it makes sense not to store a copy of each state but rather a change log. This complicates the data structure a bit. As long as the history stays intact it's fine, but what happens when you start referencing timestamps? That can create branching, which creates a lot of complexity, especially when you want to prune the data set. I will need to take some time to think about how to deal with this...





SOL 18

Time is getting very short as I'm approaching GDC. I really want to complete my memory management before I get interrupted by other stuff. As soon as I'm done with the memory manager I'm going to get into the database interface. Even though this is going to become a network protocol, I'm probably going to spend months before opening a socket. I'm trying to read up on how people like to access databases (and how people ruin any performance in them by using JSON and HTTP...).





SOL 16

I finally completed the static insert code yesterday (all 16 paths). I've been spending the day fuzz testing and cleaning up the code. I feel like I finally have control over it. I have loads of ideas on how to improve and optimize, but at least I have something working. The big missing bit right now is the regressive memory manager, but I have an outline of that too. Fuzz testing has been fun.





SOL 14

I bought a Google Chromecast Audio for my mother this Christmas. It's a wireless device that connects your computer to your stereo; essentially a wireless cable. To install it you first had to download the Chrome browser, then you had to go through a bunch of steps to link things up. When I was done I had connected my mom's stereo to... a browser. Yes, the only way to play anything was if it was in the browser. Looking around for how to play my mother's iTunes library of music, Google was happy to inform me they had set up an entire store of apps that she could use to buy all her music again so that she could play it wirelessly. After looking around a bit I found a blog where someone told me it was possible if I only patched the OSX kernel and then ran someone's node.js script. To do this I needed to install node.js, and in order to do so I had to download Xcode, and in order to run the script I had to be superuser, because administrator was not enough. In the end it turned out that OSX wouldn't let me install it even as superuser.

All this made me think of the old businesses that Internet companies used to mock: brick-and-mortar companies and rights holders who "just didn't get it". That's where the big Internet companies are right now. Sure, they will deliver, but only if you exclusively use their services and their software, and hand over all your data. It all works, but only if you do exactly what they intend for you to do. How fast they became more interested in their own interests than in serving customers.

Do you want to know how to kill a big company? Find their profit center, and then base your competitive advantage on NOT going after that money. They may understand that their revenue will go away, but psychologically they will never be able to let go of it.





SOL 12

What is beautiful code? Most computer scientists seem to think it's the fewest lines of code. The Linux kernel mailing list FAQ defines bloat as the size of the executable. If that is the case I'm writing ugly code, because I consider the code that uses the fewest cycles to be the most beautiful. Right now I'm working on the memory management, and there are many different considerations as to how to store memory. You want to eliminate as many lookups and memory moves as possible and at the same time handle very large sets of data. This means that I have ended up with 4 different memory representations. On top of this, all 4 representations can be regressive, and that gives us 8 representations. They all need to be compatible and invisible to the APIs using the memory managers. When converting from one to another, I can't simply convert them to a common format first, since the only format that can store the full range of memory is also the slowest one. The end of it is that I have to implement an awful lot of paths and find the optimal way of doing everything for each one. Happily, I have at least solidified each of the formats.





SOL 9

So how much of the internet do I plan to replace? My plan is to be able to go a week without using the web and not notice it. Think of all the web sites and services that don't deliver you something IRL. I guess that's a fair bit.





SOL 8

It's a curious thing to be able to overrule the world. Technology and science do that. They make something possible or impossible regardless of any law or opinion. It's a powerful thing. Why argue right and wrong, when you can just make it right?





SOL 6

I'm purposefully not writing a post today, just to prove that I'm not under any obligation to write every day.





SOL 5

Here is what I'm thinking: let's assume we divide data into chunks no larger than 64 megs (26 bits) when we load them into RAM, and that each memory chunk has a 64-bit address. That means that if we make an array of addresses, we can store 8 million (23 bits) addresses in one chunk. 8 million times 64 megs adds up to 512 terabytes of memory, or 49 bits of address space, 15 bits shy of a full 64-bit space. To fill a 64-bit space we need arrays of addresses to arrays of addresses, especially since chunks will be able to be significantly smaller, as you will see below.

Let's assume we have an array consisting of 3 chunks we can call A, B and C. Then we want to insert data X somewhere in the middle of this array, at a point that happens to be in B. Let's start off by saying that we do not want to reconstruct all the data into entirely new chunks. We can do this in a few different ways. One is to simply record where the insert happens and then store X. No data needs to be moved for this action. One win is that we now store regression, since we have the data both before and after the insert and can construct the data at any point in its history. This is something you may want, but more on that later. The problem is that if we pile on enough changes, retrieving the current (and probably most desirable) state becomes slower and slower, as the genealogy takes longer and longer to reconstruct.

A different solution would be to split chunk B into 3 parts: B1, the part, if any, that comes before the insert; B2, the bit that gets overwritten, if any; and B3, the bits after the insert, if any. We can now construct a list consisting of A, B1, X, B3 and C to describe the new state. This is much faster to reconstruct. If we want to store the before state, all we need to do is keep a list containing A, B1, B2, B3 and C. The downside is obviously that over time we may fragment the data significantly. If we don't want to keep the regression around, we can optimize this by merging chunks. Merging of chunks can be done in parallel with the use of the arrays. I think this is the best approach.
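
A toy sketch of the split-on-insert idea, operating only on chunk lists. It handles the pure-insert case (nothing overwritten, so no B2), only tracks chunk boundaries rather than the actual bytes or checksums, and every name in it is made up for illustration.

#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint64_t offset;  /* where the chunk's bytes start in the backing store */
    uint64_t size;    /* chunk length in bytes */
} Chunk;

/* Build the post-insert chunk list: the chunk containing 'pos' is split into
 * a head and a tail, and the new chunk 'x' goes between them. Returns the new
 * list length; 'out' must have room for count + 2 entries. */
static size_t chunk_list_insert(const Chunk *in, size_t count,
                                uint64_t pos, Chunk x, Chunk *out)
{
    size_t i, n = 0;
    uint64_t start = 0;

    for(i = 0; i < count; i++)
    {
        if(pos > start && pos < start + in[i].size)  /* split B into B1, B3 */
        {
            out[n++] = (Chunk){in[i].offset, pos - start};                    /* B1 */
            out[n++] = x;                                                     /* X  */
            out[n++] = (Chunk){in[i].offset + (pos - start),
                               in[i].size - (pos - start)};                   /* B3 */
        }
        else
        {
            if(pos == start)      /* insert falls exactly on a chunk boundary */
                out[n++] = x;
            out[n++] = in[i];
        }
        start += in[i].size;
    }
    if(pos >= start)              /* insert at the very end of the array */
        out[n++] = x;
    return n;
}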

One issue is that all data in Unravel is referenced by checksum, and computing a checksum can be very slow if the data set is very large. If a single changed byte forces re-computation of the checksum over the entire data set, this could very easily become a problem. That is not good enough, so I have some ideas. For real-time data it should be possible to send out the changes rather than the entire data set. In that case the checksum doesn't have to be updated that often.

Let's say Alice has a significant data set A and has shared it with Bob. If she makes a minor change to it, turning it into B, she can send just the change and Bob can reconstruct B. Let's assume that Carol also has data set A, but is offline when Alice makes the change. Since Alice has very little memory, she has gotten rid of A and the change and now only possesses B. Carol now wants to update her A into B. If the data set is large enough, she will ask Alice to give her a list of the chunks it is made up of, so that she can download it piecemeal. When this list arrives, Carol will notice that she already has most of the chunks, so she can reduce the downloads needed significantly.

I think this is how I will do it. Right now I'm trying to figure out what the best rules for chunk sizes ought to be.





SOL 4

I can now read data arrays that are around 16 exabytes large (obviously you can store more if you divide things into more than one array). I can create very large arrays too, but only in the range of thousands of gigabytes, since the pointers eventually become so large that you can't realistically keep them in memory. I will remedy that tomorrow, when I will work on being able to build up large data sets over time rather than define them in one go. This is exhausting, and it makes my brain hurt, but I'm making progress. I know it's going to get a lot worse once I start with regression. I've been debating whether to allow insert and delete or only overwrite, and I have come to the conclusion that insert and delete will probably be necessary and that I will get some performance gains by supporting them at the lowest level. Much like an MMU can fiddle with the address space when you call realloc, I can do the same with the chunks that make up large data sets.





SOL 3

You might think that if you were to redesign the Internet, you would make it so that everyone has an identity backed by a cryptographic key pair. You would be wrong. You wouldn't have one key, you would have thousands. First of all, you are multiple people: one for your family, at least one for your job, another one for whatever kink you are into, and so on and on. Often it's a good idea to keep your identities apart. If you live in some countries, your life may depend on it. So yes, a few identities, but thousands?

It's not just people who have identities; so do things and software. You may want to be able to know what software produced what data and what data it has access to. Data itself can also have identity. In Unravel all data is identified by hashes. This means that if you have a file and I have the same file, our computers can talk to each other about that file. We can, for instance, reference an article and know that we are both referencing the same article. No need for a signature for any of this, just a simple hash.

Sometimes we want to reference the Guardian's front page and mean a very specific front page at a specific moment in time. Other times, however, we want to reference the Guardian's front page as an evolving publication that keeps changing again and again. To do this we need a cryptographic key that keeps signing the hash of the current front page. For anything that we want to be able to change, we need an identity. Most things change, so most things need an identity, so lots of identities.

Right now I'm working on this universal link system where you can reference either a publication or a hash. This universal pointer will be 256 bytes long and contain a 64-bit length value. I also have a trick where, if the data itself is smaller than the checksum, the data can be stored directly in place of the checksum. This creates a natural 248-byte limit for data that can be embedded into links. I'm sure someone will find some great significance in that number that speaks to the human condition. I expect it to optimize network traffic. Speaking of optimizations, all these links are implemented as single memory allocations. This means that I can't use structs, but have to cast at offsets into memory. It does give me that warm low-level feel only cache coherency can give.
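
For illustration only, a struct-shaped view of such a 256-byte link; the real thing is a single allocation addressed by offsets, and the field split (8-byte length plus 248-byte payload) is just my reading of the description above.

#include <stdint.h>

/* 256 bytes total: a 64-bit length plus 248 bytes that hold either the
 * checksum of the referenced data or, if it fits, the data itself. */
typedef struct {
    uint64_t length;        /* size of the referenced data in bytes */
    uint8_t  payload[248];  /* checksum, or inline data when length <= 248 */
} UniversalLink;

/* Does this link carry the data itself rather than a checksum? */
static int link_is_inline(const UniversalLink *link)
{
    return link->length <= sizeof(link->payload);
}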






SOL 2

That was fast. Apparently the internet doesn't like to be changed; my internet connection went down today. Oh well, Visual Studio still likes me, so work continues.





SOL 1

I'm starting a new project today. I'm going to redesign the Internet. I don't like the Internet as it is, so I'm redesigning it from the ground up. I have been planning to do so for a while, but now I'm going to put some real time into it. I'm starting pretty low-level: I'm building a database to store all the necessary data. Why transfer data if you can't store it, right?

It needs to be secure, fast, thread safe and easy to use. As I'm a C programmer I'm not really sure what kind of fluffy ease of use today's high-level programmers expect, but I want direct memory access where you can get a pointer straight into a block of memory (did I mention that I want it to be low-level fast?), and I also want to be able to handle data sets way larger than can fit into memory. What's the best way to get a pointer into a 16-terabyte file?
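
One classic answer to that question is to let the OS page the file for you with mmap. A minimal POSIX sketch follows (a 64-bit build is assumed, and whether this is fast and predictable enough for Unravel is exactly the open question).

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int fd;
    struct stat st;
    unsigned char *data;

    if(argc < 2)
        return 1;
    fd = open(argv[1], O_RDONLY);
    if(fd < 0 || fstat(fd, &st) != 0 || st.st_size == 0)
        return 1;
    /* The kernel pages the file in on demand; 'data' can point into a file
     * far larger than physical RAM on a 64-bit machine. */
    data = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if(data == MAP_FAILED)
        return 1;
    printf("first byte: %u\n", data[0]);
    munmap(data, st.st_size);
    close(fd);
    return 0;
}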

Right now I'm trying to wrap my head around what limitations I'm required to live with. I don't want to be the guy who thought no one would ever need more than 640k. I'm considering a limit of 8-exabyte files. I hope that's OK. This is one of the reasons why this is an interesting project. It's easy to build something when you know what it is going to be used for. I'm building something that is meant to create things we can't yet imagine.