open source


Video to promote Climate Camp Australia.

This video was made with Kdenlive. I have to say, I’d never enjoyed using a program that crashes every five minutes (sometimes taking the whole system down with it) before using Kdenlive. It’s easy to use, and intuitive. Can’t wait for version 1.0.

The following is a discussion from #swig on irc.freenode.org – the Semantic Web Interest Group. It’s logged here if you don’t believe me: http://chatlogs.planetrdf.com/swig/2008-04-15#T10-32-11. Edited slightly for clarity.

I think the semantic web is an extremely useful tool, but as I mention at the bottom, had I lived at the time, I probably would have agreed with Francis Bacon that cutting up animals in the name of science was a good thing. For the record, I don’t believe that now.

Reading the comment first might help.

naught101: check http://www.semanticfocus.com/blog/entry/title/5-problems-of-the-semantic-web/

naught101: my comment down the bottom, would love feedback from anyone here

bengee: simplification is a feature, not really a problem

bengee: URIs and triples reduce the complexity to a level that computers can do useful things with it

bengee: e.g. <#product> :rating “***”; :rating “****”; :rating “**”. what might seem contradictory to you may be very useful to an app

naught101: na, I wasn’t talking about that kind of information bengee

naught101: Say philosophy for instance… let me find a nice quote

naught101: Only when the last tree has died and the last river been poisoned and the last fish been caught will we realise we cannot eat money.

naught101: * Cree Indian Proverb

naught101: Obviously a computer could use this sentence, but would it be able to use it in a way useful to humans?

naught101: Obviously it’s not TRUE, as most of us already know we can’t eat money

bengee: <#only1> a :CreeIndianProverb; rdf:value “Only when…” .

bengee: that triple could be useful for programs that list proverbs

naught101: yeah, sure, but that’s triplification ABOUT the proverb, not about the information contained within the proverb

bengee: well, then you have to increase the granularity if your app wants to provide richer functionality

naught101: how do you mean?

bengee: extract more triples from the human-readable text

naught101: but what triples could you extract from a one-sentence text that has no quantitative truth, but which holds more qualitative truth than many many paragraphs of, say, a science text book?

bengee: exactly

naught101: huh?

bengee: you may misunderstand what the semweb is mainly for

naught101: sure, that’s a true sentence 🙂

bengee: it’s not for implementing automated philosophers, or to compete with humans with respect to intelligence

bengee: well, OWL folks might disagree with me here 😉

bengee: the more rewarding approach (IMHO) is to think about use cases that semweb tech *can* enable/simplify, not to think hard about things that are near-impossible for computers in general

naught101: I agree. but I’m not talking about what the semantic web should be

naught101: I’m talking about possible problems with what it currently is

naught101: I mean, I don’t want bite-sized chunks of information taking over the world of ideas

naught101: I think the philosophy of the public is degraded enough without chopping it into bits even more

bengee: oh, semweb tech can clearly improve the distribution and discovery of ideas

bengee: just like the web did

naught101: but it could also hide them

naught101: I don’t think the ‘web did, necessarily

bengee: you just google’d WRT, no?

naught101: correct

naught101: I don’t think finding an acronym compares to finding meaning in life.

naught101: (if I sound like I’m attacking the semantic web, I’m not, I’m just exploring ideas)

bengee: yeah, don’t think I can contribute too much here, sorry.

naught101: no worries 🙂

naught101: I see it as something like Baconian/Cartesian science. it’s useful for finding out the little bits of information, but it’s not particularly useful for figuring out the interrelationships, or looking at the information holistically

naught101: I mean, for example, the semantic web can take information from a wikipedia article, but it couldn’t write a wikipedia article

kjetilkWork: right

kjetilkWork: I don’t think it is a very significant goal of the semweb to produce that kind of information

kjetilkWork: we have a billion people out there that can do that much better

bengee: the semweb can be a great aid in helping you write the article, though

naught101: sure, but thinking about the possibility of the semantic web becoming a large part of the web should probably include thinking about what it can’t do, and how not to impede that work

kjetilkWork: rather than the AI world of natural language analysis to reason and infer relations, I think the semweb is much more about using the collective intelligence of all its users, i.e. real intelligence

naught101: bengee: yes, it could. it could also be a hindrance (information overload)

naught101: kjetilkWork: good. I like that. I just hope we’re collectively intelligent, and not collectively stupid 🙂

kjetilkWork: well, that’s what it means to me, at least

kjetilkWork: hehe, yeah

kjetilkWork: I think semweb can help us be collectively intelligent rather than stupid, though… 🙂

naught101: I haven’t got that far yet

naught101: 🙂
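
(If the notation in the log is unfamiliar: bengee’s “increase the granularity” suggestion might look something like the Turtle sketch below. It’s purely illustrative; the :theme and :claims terms are invented for the example, not part of any real vocabulary.)

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix : <http://example.org/vocab#> .   # hypothetical vocabulary

<#only1> a :CreeIndianProverb ;
    rdf:value "Only when the last tree has died…" ;
    :theme :Sustainability ;                                  # coarse "aboutness"
    :claims [ :subject :Money ; :predicate :isNotEdible ] .   # one triple mined from the text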

There’s a huge wave of open-licensing sweeping the ‘net, and it’s starting to get into the real world. This is definitely a good thing: freedom of information is a great thing. The most common licenses, such as the GNU FDL or the Creative Commons BY-SA, stipulate that anyone can use the works, as long as they acknowledge the author, and that they keep the works free (usually by using the same license). That last stipulation has been called “viral” by numerous capitalists, and they are correct, it is. Eventually it will take over the world, or at least a large part of it. I can’t wait.

Creative Commons, and perhaps a few other licences, give people the option to license their work with a “non-commercial” (NC) clause. This is strongly derided, particularly amongst the free software movement, as economic exploitation by a creator is considered a freedom and a right. This is argued well on the Freedom Defined wiki.

There are two main arguments against using an NC license: the first is economic, the second is a matter of compatibility. A third, minor argument against CC BY-NC-SA is an argument against Creative Commons itself. I will deal with these in that order.

I just deleted about 100 photos, which I really would have preferred to keep, from an ext3 external hard drive. With shift+delete (do not pass the trash, do not collect $200). So I went looking for an answer.

If you’ve looked around the ‘net for a way to recover files from an ext3 partition, you’ve probably found lots of people saying “it can’t be done, because the inodes get wiped”. Well, that’s true: there’s no way to simply mark the inodes undeleted and have your files back. BUT your actual files don’t get wiped, and if you’re lucky, you may be able to retrieve some or all of them.

First step: after you delete something accidentally, DON’T WRITE ANYTHING TO THE PARTITION. If you do, you are likely to overwrite the blocks containing your files. This means that if you deleted something accidentally from your root partition, home partition, or any other system partition, unmount it immediately. This may mean you need to turn off your computer, remove your hard drive, and put it in another computer as a slave. I was working on an external hard drive anyway, so I didn’t really need to worry, as nothing would get written to it without me telling it to write to it…
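
(If the partition is mounted, something like this should do it. /media/external is just a stand-in for wherever your drive actually lives.)

$ sudo umount /media/external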

Second, I recommend you read this: Brian Carrier’s “Why Recovering a Deleted Ext3 File Is Difficult . . .”. It’s where I got most of my information from. Wisest of all is the cry of “don’t forget to back up anything important”. Unfortunately, I was working on an old backup with no redundancy. i.e. I was screwed.

Third, you need The Sleuth Kit, and you need Foremost, both of which are in the Ubuntu repositories, and are probably available packaged for most distros. You REALLY need to read the man page for Foremost, and it would be a good idea to read the man page for dls, a Sleuth Kit program.
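
(On Ubuntu, something like this should pull in both; package names may differ on other distros.)

$ sudo apt-get install sleuthkit foremost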

Lastly, you need some spare space somewhere. Depending on the files you’re looking for, you might need a lot of space. I had an empty 40 GB partition, so I used that, but in the end I only needed 180 MB of space to retrieve 160 MB of photos (this is probably NOT typical).

The magic phrase:

$ dls /dev/sdb5 | foremost -T -o/media/MUSIC/parish -tjpg

Ok, an explanation:

  • dls simply reads the blocks of a partition, straight out (by default, only the unallocated ones; more on that below). Don’t ask me how or why. In my case, I needed to read /dev/sdb5.
  • The output of dls is piped to foremost, which reads from stdin by default
  • -T timestamps the output directory. It isn’t necessary, but if the directory you’re outputting to is not empty, foremost won’t write to it.
  • -o<dir> is your output directory. In my case, a folder in my spare partition.
  • -t<filetype> is where I specified foremost should look for jpegs. Foremost actually looks for signatures, such as the first few bytes of a file, to see if it matches a certain pattern. There are a set number of filetypes included by default (Read the MAN page), or you can create your own. You can also ask for several types at once, as sketched below.
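
(For example, if you were after more than just photos, something like this should work. I haven’t tried this exact line, so treat it as a sketch.)

$ dls /dev/sdb5 | foremost -T -o /media/MUSIC/parish -t jpg,png,doc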

So that’s it. I ended up getting all 108 photos back, minus filenames, but that’s fine, because I usually just rename my files with the EXIF date/time data anyway.
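
(For the renaming, exiftool can do it in one line; exiftool wasn’t part of this exercise, it’s just the tool I’d reach for. recovered/ stands in for wherever your files ended up, and the %%-c adds a counter in case two photos share a timestamp.)

$ exiftool '-FileName<DateTimeOriginal' -d '%Y%m%d_%H%M%S%%-c.%%e' recovered/*.jpg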

Because dls only looks at unallocated blocks by default, the process ignored about 8000 other jpgs on the 95 GB drive, and only recovered ones that had been deleted. Very handy for me. I let it run overnight, but at a guess, I think it probably only took about 3 hours max (the hard drive is very full, so it probably only ran over about 5 GB total).

fuckin’ yay.


Update: so… it works, but it doesn’t join the blocks back together correctly, so I have my ~100 photos (jpeg files), but each of them is damaged about 10% into the actual image. I can still read the thumbnails, but not the whole image. If anyone has a good suggestion for how to fix this, please let me know.



Sharing your bookmarks and photos is all well and good. Discussing stuff on forums can be interesting. Editing wiki pages is better, coming closer to a collective consciousness.

But the computer has always been an individualistic tool, which sucks. No matter what I do on a computer, it’s always just me. Fundamentally, a computer is used for making things: usually art, writing, algorithms, etc. When I do these things in real life, I can do them with other people, at the same time. This is especially true for music, note taking, etc. This is much more difficult on a computer. Mostly, the only way to do it is to write your bit, save it, send it to someone else somehow, get them to check it, and have them send it back. I want to do it in realtime.

Googledocs looks like it could be it, but I’m wary of storing anything on Google’s computers. Also, it’s proprietary software, so it’s hard to know what you’re really getting.

The other option is online whiteboards, of which there are a few. Inkscape has a whiteboard function, but it’s difficult to use so far, and may not be included in the next official build (0.46) (although you can include it any time you build from source).

Basically, what I’d like to see is probably something like googledocs running over XMPP, where each user has a locally stored file, which is updated whenever a session is started between users, and can be updated in realtime, with the messages sent as short edits, not the whole file. Damn that would be cool.
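
(The “short edits” part already exists in rough form in ordinary unix tools. A sketch, with made-up file names: you’d send only the patch over the wire, and the other side would apply it to their copy.)

$ diff -u before.txt after.txt > changes.patch   # capture my edit as a small patch
$ patch before.txt < changes.patch               # the other side applies it to their copy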

One problem I can see with such a system is that if one user wants to edit something, then send it through another static communications system (like an e-list), you could end up with a few duplicates. That might be solved by somehow creating a unique fingerprint (not based on the file contents, obviously, but perhaps on the date and time, and the original title?), or simply by allowing users to diff the files, and choose which changes to include…
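
(The fingerprint idea, sketched in shell. Purely illustrative: hash the creation time plus the original title once, and the ID survives any later edits or renames.)

$ printf '%s|%s' "$(date -Iseconds)" "my original title" | sha1sum | cut -d' ' -f1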


Richard Monson-Haefel’s “Open Source Is Anarchy, Not Chaos” on Biosmagazine.co.uk is an interesting article, but it misses some major points, and gets one completely wrong.

Richard is quite right about open source being anarchistic, but is way off in his description of how.

Let’s start with:
“All open source projects have a leader who is frequently, but not always, the founder of the project. This is well aligned with anarchy as defined above;”
Leadership is almost directly antonymic to anarchism. While leadership often implies good judgement (in western thought, at least), it also means power. In the way he describes it, with the “leader” having complete control over who commits, what goes in, and what stays out of the project, Richard is describing a tyranny, not an anarchy. This part of the system is completely unanarchistic, albeit only because of security issues.

The real anarchistic part of open source is the power of the community. Richard touched on that a bit, but missed the core part: the community actually has the power over the codebase, because if they don’t like what’s happening with the project, they can simply fork it, and start a new project, with an already complete codebase to work from. This basically means that the code follows the community, and not the other way around, as in proprietary software. This is the way it should be: the user should define the tool, not the other way around.

That’s where the anarchism lies in open source: in the community, and definitely not in the leadership…

It’s not a bad article, anyway. For another good article, in a similar vein, check out: http://ming.tv/flemming2.php/__show_article/_a000010-001239.htm


I just upgraded from Ubuntu Feisty to Ubuntu Gutsy (which is still in beta). About 6 hours, a couple of hiccups, and one or two fixes later (for problems that I caused myself, and for running out of hard drive space part way through), I’m running the new version. With all the same software still installed, and all my preferences and options exactly the same as before. Damn that’s nice.
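
(For the record, the upgrade can be kicked off from the command line with something like the following, assuming the update-manager-core package is installed; the -d flag asks for the development release. This is a sketch of one way to do it, not necessarily the way I did it.)

$ sudo do-release-upgrade -d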

The really beautiful thing? Barely anything’s changed. A nice modification of the login dialogue for KDE (something that I’ve been wanting for a while: a user list), one or two new packages (GDebi for KDE is nice), and a couple of hundred minor upgrades that I don’t really need to know about, or that I get pleasantly surprised by when I open up old programs. I can go about my business, exactly as before (how’s that for productivity (wanker)), knowing that everything’s just that little bit more useful, safer, quicker.

Thanks to the Debian devs, Ubuntu devs, and everyone who made all the packages that make using Linux such a joy!

