Thursday, December 04, 2008

Adobe Flex and Flash Face detection library

I'm interested in developing a Flex or Flash library that could be used for face detection or augmented reality. I haven't found any on the web so far.
Just wondering how many of you are also looking for one.
[]'s

Wednesday, September 17, 2008

EHCI 0.5 is now ready for PyCon Brasil 2008

Well, it was about time :)
EHCI 0.5 has just been released, and it now features Python bindings.

It's pretty easy to use EHCI in Python. The following snippet shows how to do it in 6 lines:

import ehci

ehci.ehciInit()

while(1):
    ehci.ehciLoop(1, 0)
    x, y, width, height = ehci.getHeadBounds()
    print "Coord (", x, ",", y, ") width ", width, "height ", height

These two videos give some idea of EHCI integration with Panda3D:





To download it, check http://code.google.com/p/ehci

Friday, August 29, 2008

EHCI Final Report

Official EHCI project site

Well, it's the end of Google Summer of Code, and I have to say it was great to work with the Natural User Interface Group and with Google's support.

One of the last missing features was the ability to browse through an image with the hands, so I'd like to post this video here:





The most recent updates since the last blog post are the Windows binaries, as well as a new hand interaction demo.

Windows binaries for the 6-degrees-of-freedom head tracking are available for download. This version was compiled without OpenMP support, so it runs much slower than the source version compiled with OpenMP support; it won't work as well as it does on Linux.
UPDATE: The new version supports OpenMP.

From the updated planning, the features in red have been completed since the re-planning (features in blue have been removed from the project plan):

1st Month:

Hand and Head tracking. 3D head tracking class. Small OpenGL demos.

2nd Month:

Body tracking, and gesture recognition classes. Zoom and rotation features. Documentation of classes through tutorials, code documentation and demos

3rd Month:

Motion flow and 3d model wireframe tracking classes. Documentation. Project packaging through Google Summer of Code and Natural User Interface sites.
Packaging on the Natural User Interface site is scheduled for September 3rd.

In the end, most of the initially planned features have been implemented and documented.

I'd like to thank:
Everyone from OpenCV project (for creating this amazing library)
Pawel Solyga, NUI (for being such a great mentor)
Thomás Cavichioli Dias, ITA (for teaching me how to use OpenCV, as well as for giving me in-depth information on how to use and create cascade classifiers)
Juan Wachs, BGU, Israel (for creating the hand detection cascade)
Stefano Fabri, Sapienza - Università di Roma (for all the interesting papers, articles and attention)
Roman Stanchak, OpenCV (for all the help with Swig and Python interfaces)
Len Van Der Westhuizen (for creating and releasing the head model used throughout the project)
Vincent Lepetit, Computer Vision laboratory, EPFL (for the great survey and advice)
Mike Nigh (for the Irrlicht work)
Jared Contrascere, Bowling Green (for the OpenCV/Ehci/Windows work)
my professors at Instituto Tecnológico de Aeronáutica (for all the knowledge taught)
Johnny Chung Lee, Carnegie Mellon (for his great ideas with Wii)
everyone else that I'm unfortunately forgetting, and
my girlfriend Kathy, family and friends (for supporting me through the project),
and, of course, God, Who has given me the strength, love and support to carry out this project!

Wednesday, August 06, 2008

EHCI Updates - Version 0.4 has just been released

(check project site at http://code.google.com/p/ehci)
EHCI (Enhanced Human Computer Interface) now features packaging through a tarball. Installation is supposed to be as simple as configure, make, make install. Besides easier installation, the new version has several features, like:

- New features:
  • Hand detection/tracking: now users can interact with the computer using their hands (notice that no accessory besides an ordinary web cam is being used).



The result can also be seen in a noisier environment in this video.


  • Enhanced lighting model/more robust algorithm: this video shows the new lighting model, as well as the 6-degrees-of-freedom head tracking in a noisy environment.
- New API:
  • EHCI's new API focuses on simple functions, so that developers can completely abstract away the OpenCV layer. Example functions (a fuller sketch appears after this list):
while(1){
    ehciLoop(EHCI2DFACEDETECT, 0);
    getHeadBounds(&upperX, &upperY, &headWidth, &headHeight);
}
- Installation procedure:
  • Autotools-based installation is now available: a simple ./configure && make && make install should be enough for developers to start using the EHCI library
  • A distribution tarball is easily downloadable from the ehci project site

- Updated documentation:
  • new demos (simple2d and simple3d)
  • cleaned up code (boxView3d and 6dofhead have been cleaned up)
  • tutorials have been posted on project wiki
  • the project now features doxygen documentation

- Robust algorithms:
  • The 6-degrees-of-freedom tracker now considers up to 200 feature points, which provides better tracking
  • Further enhancements to the algorithm have been researched, documented, and are on the way
- Lighting model working:
  • The 6-degrees-of-freedom sample now considers normals for accurate lighting
  • Blending functions, as well as a single GLUT layer, have been added
- Tagged versions:
  • SVN tag directory is being updated accordingly
- Python bindings:
  • Python bindings are on the way: SWIG is being researched and some drafts have already been developed.
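To give an idea of how these calls fit together, here is a minimal, hypothetical C sketch built only from the functions mentioned above (ehciLoop, getHeadBounds, and the ehciInit call exposed to Python). The header name and exact signatures are assumptions on my part, so check the project sources for the real ones:

#include <stdio.h>
#include "ehci.h"   /* assumed header name; see the EHCI sources for the actual include */

int main(void)
{
    int upperX, upperY, headWidth, headHeight;

    ehciInit();   /* same initialization exposed to Python as ehci.ehciInit() */

    while (1) {
        /* grab a frame and run 2D face detection */
        ehciLoop(EHCI2DFACEDETECT, 0);

        /* retrieve the detected head rectangle */
        getHeadBounds(&upperX, &upperY, &headWidth, &headHeight);

        printf("Head at (%d, %d), size %dx%d\n", upperX, upperY, headWidth, headHeight);
    }

    return 0;
}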

Friday, July 11, 2008

ITA Latex Users Society - ITALUS

I'd like to leave a link to the project for using LaTeX at ITA. Here is a brief description and the link:

http://code.google.com/p/italus

This project aims to spread the use of LaTeX for theses at the Instituto Tecnológico de Aeronáutica, both for undergraduate students and for master's and doctoral students. The work hosted on this site consists of templates for generating theses in the formats required by the institution.
We count on the support of all users, whether by requesting new features, adapting to a new ITA specification, or even "coding" these changes.

Tuesday, July 08, 2008

Adding keywords to eclipse rcp preferences lookup (search text field)

One might need to add more words to the Eclipse preferences search bar while developing an RCP application. By default, it only looks at the preference page title. In order to add new keywords, one needs to add a keyword reference and a keyword extension point.
Supposing the plugin.xml has the following sample page:

<extension point="org.eclipse.ui.preferencePages">
<page class="testercp.preferences.SamplePreferencePage" id="testercp.preferences.SamplePreferencePage" name="Sample Preferences">
</page>
</extension>

Add a keyword reference through:

<extension point="org.eclipse.ui.preferencePages">
<page class="testercp.preferences.SamplePreferencePage" id="testercp.preferences.SamplePreferencePage" name="Sample Preferences">
<keywordreference id="marte.keywords.preferences">
</keywordreference>
</page></extension>

And then, add the keyword extension point as:

<extension point="org.eclipse.ui.keywords">
<keyword id="marte.keywords.preferences" label="velocity stopping point">
</keyword></extension>

Now, if anyone types velocity, stopping, or point, it will bring up the SamplePreferencePage.

There's also a way to add these extensions through the wizards.

Monday, July 07, 2008

EHCI Update - 6 degrees of freedom head tracking



I'm posting here some updates on the Google Summer of Code EHCI project. This part of the project deals with head tracking with 6 degrees of freedom, a problem often referred to as finding the pose of an object. Since no light is emitted from the head - as in some types of infra-red tracking - the algorithm needs to rely on the head's natural features. This implementation tries to follow the excellent work by Luca Vacchetti, Vincent Lepetit, and Pascal Fua, from the Computer Vision Laboratory of the Swiss Federal Institute of Technology (EPFL), "Fusing Online and Offline Information for Stable 3D Tracking in Real-Time". The paper is available here.


There's a video on YouTube showing the current progress.

Details

The algorithm starts by automatically looking for a head in the image, through the famous Viola-Jones algorithm.

After finding the head position, a feature tracking algorithm is started. It uses cvGoodFeaturesToTrack in the region of interest defined by the head width and height. When these features are discovered, they are mapped back to a head model (I'm currently using a cylindrical model, but I plan to use the excellent head model by Len Van Der Westhuizen, which is available here - thanks, Len!).

Once the head model's 3d points are known, as well as their corresponding 2d image points, DeMenthon's POSIT algorithm is used to find the initial pose estimate.

After that, the Lucas-Kanade optical flow algorithm is used to track the points across frames. These points are mapped back to the original 3d points and the pose matrix is updated.

The source code shows how to deal with several important OpenCV functions, such as cvGoodFeaturesToTrack, cvCreatePOSITObject, cvPOSIT, and cvCalcOpticalFlowPyrLK, as well as some interesting OpenGL features, like loading custom ModelView and Projection matrices through glLoadMatrix.
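To make the pose estimation step more concrete, here is a minimal, self-contained C sketch of the cvCreatePOSITObject/cvPOSIT calls. The model points, image points, and focal length below are made-up placeholders; EHCI itself maps tracked features onto a cylindrical head model instead:

#include <stdio.h>
#include <cv.h>

int main(void)
{
    /* Four non-coplanar 3d points on the head model, in model coordinates.
       POSIT requires the first point to be the origin of the model frame. */
    CvPoint3D32f modelPoints[4] = {
        {  0.0f,  0.0f,  0.0f },   /* nose tip (model origin) */
        { -6.0f,  4.0f, -4.0f },   /* right eye corner        */
        {  6.0f,  4.0f, -4.0f },   /* left eye corner         */
        {  0.0f, -6.0f, -2.0f }    /* chin                    */
    };

    /* The corresponding tracked 2d image points, expressed relative to the
       image center, which is what cvPOSIT expects. */
    CvPoint2D32f imagePoints[4] = {
        {   0.0f,   0.0f },
        { -25.0f,  20.0f },
        {  25.0f,  20.0f },
        {   0.0f, -30.0f }
    };

    float rotation[9];            /* 3x3 rotation matrix, row major */
    float translation[3];         /* translation vector             */
    double focalLength = 760.0;   /* camera focal length in pixels (placeholder) */

    CvPOSITObject* positObject = cvCreatePOSITObject(modelPoints, 4);

    cvPOSIT(positObject, imagePoints, focalLength,
            cvTermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 100, 1.0e-5),
            rotation, translation);

    printf("translation: %.2f %.2f %.2f\n",
           translation[0], translation[1], translation[2]);

    cvReleasePOSITObject(&positObject);
    return 0;
}

In the real tracker the 2d points come from cvGoodFeaturesToTrack and cvCalcOpticalFlowPyrLK rather than being hard-coded, and the resulting rotation and translation are what end up loaded into OpenGL through glLoadMatrix.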

I'd really like to thank God and everyone who has helped me develop this work with invaluable tutorials, papers, 3d models, and e-mails.

Links

Posit tutorial: http://opencvlibrary.sourceforge.net/Posit

Explanation of the raw format: http://local.wasp.uwa.edu.au/~pbourke/dataformats/povraw/


The full report is available at http://code.google.com/p/ehci/wiki/6dofhead

Friday, June 27, 2008

ITA Cell Research and the Cell Ecosystem

Well, it's nice to have an opportunity to show ITA's progress with STI Cell research. On June 23rd, 2008, we had the pleasure of receiving IBMer Dr. Robert M. Szabo, from Cell Ecosystem Development Systems, and Mr. Flavio Carazato, from University Relations at IBM Brazil.
A presentation with ITA's research is available here. Besides showing our work, we also received important Cell-related information, such as access to a QS20/QS22 (double-precision floating point!) cluster at GaTech. A full report of the visit is available in our wiki at http://code.google.com/p/ps3hacking/wiki/2008BobReport. Thanks to Robert Szabo, Flavio Carazato, and all the researchers and professors from ITA.

Thursday, June 05, 2008

EHCI Updates

This week I've been working on hand tracking. I've read some important papers on the subject and installed software that deals with it; both are reported at http://code.google.com/p/ehci/wiki/HandTracking . Besides that, I've made some small videos (http://www.youtube.com/watch?v=o1WNb0g0f9Q and http://www.youtube.com/watch?v=Rmh-mZFxWns) showing the behaviour of "Flock of Features" and Viola-Jones Haar cascades. Both yield very good results, each with a different goal.

While using the Viola-Jones cascades, I've applied Juan P. Wachs' XML training file. This file has been trained with more than 1000 images so that the A gesture (a closed hand pointing upwards) can be effectively recognized. Thanks to Juan for making the file available. Besides that, I'd also like to thank Dr. Vincent Lepetit for great directions to follow.

I'm now studying Viola-Jones haartraining in more detail to see if the same approach used for the closed hand can be applied to detecting open hands, as well as to detection in other orientations. I'm also planning to test some 3d model tracking.

Some other minor updates are my school's and NUI Group's logos now appearing on the front page of the project.

Once hand detection is finished, at least for two closed hands, the zooming and rotating features can be implemented, which are the goals of the first month.

Saturday, May 31, 2008

EHCI Update

This blog entry is an update on the GSoC project Enhanced Human-Computer Interface.

Updates:
Videos:



Tuesday, May 27, 2008

Enabling gnome-terminal shortcut under compiz

Just in case the shortcut you've set up for opening your terminal is not working under compiz, try to do the following:

  • Open gconf-editor (in case you don't have it, look for it in your package manager (apt-get, synaptic, yum, etc.))
  • Go to apps->compiz->general->allscreens->options, then look for command_terminal and write 'gnome-terminal' (quotes for clarity)
  • You should be all set :)
I hope that helps,
be with God

Monday, May 12, 2008

Gnuplot in Action - Book review

A great reference about gnuplot is Philipp Janert's Gnuplot in Action. This book explains gnuplot concepts in a straight-to-the-point way, covering plotting functions, reading from a file, selecting columns to plot, exporting plots as images, and creating macros right at the beginning.
Interesting features, like plotting data that's not sorted, as well as multi-line records, are also covered.
Smoothing a line with a Bezier curve from data in a file "test.txt" is shown to be as simple as:

plot "test.txt" smooth bezier

This book also shows how to create logarithmic plots using gnuplot, with sidebar explanations included.
A crazy example showing how to plot a Unix password file is presented, so that some string-related data can be shown. Hot keys and mouse interaction are also covered.

There's an entire chapter dedicated to plot styles. Errorbars are covered in the same chapter.

Another very interesting chapter is the one that deals with 3D plots through surface plots (splot) and contour functions.
A nice explanation of terminals, as well as macros, scripting and batch operations, is given at the end.

Overall, Gnuplot in Action is an in-depth reference on gnuplot, and everyone who needs a deep understanding of this great tool should keep this book handy.

In order to evaluate the book, one can get the freely available sample chapter Essential Gnuplot.
One can also get the software through http://www.gnuplot.info/ or by using one's preferred package manager.

Wednesday, April 30, 2008

Generating keypresses on Linux

I was trying to simulate keypress events for some JavaScript-based webpage - actually it was typing the whole alphabet - and I came up with the following code, using Xlib:

#include <X11/extensions/XTest.h>
#define XK_LATIN1
#define XK_MISCELLANY
#define XK_XKB_KEYS
#include <X11/keysymdef.h>
#include <X11/Xlib.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

int main(int argc, char **argv)
{
    Display* pDisplay = XOpenDisplay( ":0.0" );
    if( pDisplay == NULL ) return 1;

    /* keysyms for the whole alphabet */
    KeySym key[] = { XK_a,XK_b,XK_c,XK_d,XK_e,
                     XK_f,XK_g,XK_h,XK_i,XK_j,
                     XK_k,XK_l,XK_m,XK_n,XK_o,
                     XK_p,XK_q,XK_r,XK_s,XK_t,
                     XK_u,XK_v,XK_w,XK_x,XK_y,
                     XK_z };

    /* give yourself a few seconds to focus the target window */
    system("sleep 4");

    int i;
    for( i = 0; i < 26; i++ )
    {
        /* press and release each key */
        XTestFakeKeyEvent( pDisplay, XKeysymToKeycode( pDisplay, key[i] ), True, 0 );
        XTestFakeKeyEvent( pDisplay, XKeysymToKeycode( pDisplay, key[i] ), False, 0 );
    }

    XFlush(pDisplay);
    XCloseDisplay(pDisplay);
    return 0;
}

In order to compile it, just run:


gcc generateKeys.c -lX11 -lXtst


I hope it helps!

Monday, April 21, 2008

Fisl 9.0



I stayed at the programming arena, more specifically checking out what's new in open source development for Nokia's Internet Tablets and cell phones.
The operating system for the 770, N800, N810 and N810 WiMax models is Maemo, and the window manager is Matchbox. The GUI toolkit is Hildon, also used in Ubuntu Mobile (http://live.gnome.org/Hildon , https://stage.maemo.org/svn/maemo/projects/haf/doc/api/index.html). Some Hildon screenshots can be seen here (http://test.maemo.org/screenshots.html).
On the first day, the development platform was the N95 (http://en.wikipedia.org/wiki/Nokia_N95), whose operating system is Symbian OS, which runs exclusively on ARM processors. Given that this operating system is proprietary and its C++ implementation is not standard, an interesting way to program for the N95 is through Python. With Python for S60 (http://opensource.nokia.com/projects/pythonfors60/) one can build stand-alone applications and quickly develop prototypes. An example application that used the camera to take a picture and save it to the file system could be written in a little over an hour. An excellent reference with S60 tutorials can be found here (http://www.mobilenin.com/pys60/menu.htm). And here (http://www.mobilenin.com/pys60/resources/ex_camera_viewfinder.py) is the code for an S60 application that takes a picture in 19 lines.
Then comes the question: why not develop everything in Java ME? These threads give an idea (http://discussion.forum.nokia.com/forum/showthread.php?t=125743 , https://developer.symbian.com/forum/message.jspa?messageID=59978 ). Basically, it comes down to the virtual machine not exposing some functions and to the speed of the programs. When these two issues are not important, Java ME is very likely the better choice, especially since Symbian's C++ is not the same one commonly available on desktops.
Regarding Maemo development, check out OpenBossa (http://www.openbossa.org/), with several interesting solutions combining Python, Linux and embedded development.
Here (http://labs.vivi.eng.br/blog/?p=44 , http://labs.morpheuz.eng.br/blog/21/04/2008/fisl9-good-start/) are two posts about the Fisl 9.0 programming arena, explaining what was done each day. A pretty funny post also appeared on the fisl site: http://www.fisl.org.br/9.0/www/node/475 .
A big hug to everyone I met at this gathering, as well as a big thank-you for the opportunity provided by CCA and for the company of my friends from work :)

Sunday, April 06, 2008

Google SoC 2008 Work Schedule

1st Month:
* Hand tracking/gesture:
1st week: * Study and implement Viola-Jones http://research.microsoft.com/~viola/Pubs/Detect/violaJones_IJCV.pdf paper for hands
2nd week: * Study and implement Flock-of-features http://www.movesinstitute.org/~kolsch/handvu/KolschTurk2004Fast2DHandTrackingWithFlocksOfFeatures.pdf
3rd week: * Study and implement posture recognition and hand gestures http://www.movesinstitute.org/~kolsch/pubs/Dissertation_twoside.pdf
4th week: * Test features and integrate developed code in an easily accessible C++ class. Show zoom and rotate functionalities.

2nd Month:

* Head and body tracking
1st week: * Facade classes for OpenCV's already-implemented head and body tracking.
2nd week: * Study and implement head distance information.
3rd week: * Combine 2d head tracking and head distance, so that 3d head tracking is done.
4th week: * Integration tests and integrated classes. Deliver small OpenGL based demos and tutorials on how to use the framework.

3rd Month:
* Motion flow and augmented reality
1st week: * Create easy to access objects that react to motion flow, similar to the ones I've developed here: http://www.youtube.com/watch?v=QJvKT-NId9M
2nd week: * Study and implement 3d model tracking through wireframes http://www.bmva.ac.uk/bmvc/2000/papers/p66.pdf
3rd week: * Integrate developed research in easily accessible classes and write documentation.
4th week: * Time to develop side projects, such as packaging TouchLib for Linux, or to use in case the prior time wasn't enough for some features.

Sunday, March 30, 2008

Enhanced version of head tracking and openGl

This video shows my enhanced approach - using CV_HAAR_FIND_BIGGEST_OBJECT - to combine OpenCV head tracking with a 3d OpenGL environment, so that the user sees objects from their head's point of view. Since this head tracking is 2d, no depth information is obtained yet, although I'm planning it for a future post. The face detection algorithm now takes around 20 ms, which gives a nice refresh rate.

Thanks to Vadim Pisarevsky for the function.
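For reference, here is a minimal, hypothetical C sketch of how the biggest face can be detected with that flag through the old OpenCV C API. The cascade path and parameters are illustrative; this is not the project's actual code:

#include <stdio.h>
#include <cv.h>
#include <highgui.h>

int main(void)
{
    /* path to a stock OpenCV frontal face cascade - adjust to your installation */
    CvHaarClassifierCascade* cascade =
        (CvHaarClassifierCascade*)cvLoad("haarcascade_frontalface_alt.xml", 0, 0, 0);
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvCapture* capture = cvCaptureFromCAM(0);
    if (!cascade || !capture) return 1;

    cvNamedWindow("camera", CV_WINDOW_AUTOSIZE);

    for (;;) {
        IplImage* frame = cvQueryFrame(capture);
        if (!frame) break;

        IplImage* gray = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
        cvCvtColor(frame, gray, CV_BGR2GRAY);
        cvClearMemStorage(storage);

        /* CV_HAAR_FIND_BIGGEST_OBJECT makes OpenCV stop after the largest face,
           which is what brings detection down to around 20 ms */
        CvSeq* faces = cvHaarDetectObjects(gray, cascade, storage, 1.1, 2,
                                           CV_HAAR_FIND_BIGGEST_OBJECT,
                                           cvSize(30, 30));
        if (faces->total > 0) {
            CvRect* r = (CvRect*)cvGetSeqElem(faces, 0);
            printf("head at (%d, %d), %dx%d\n", r->x, r->y, r->width, r->height);
        }

        cvShowImage("camera", frame);
        cvReleaseImage(&gray);
        if (cvWaitKey(10) == 27) break;   /* ESC quits */
    }

    cvReleaseCapture(&capture);
    return 0;
}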

Sunday, March 16, 2008

Simple PlayStation3 (PS3) HelloWorld program for cell, without makefiles

I've created a simple hello world showing how to call an SPE program from a main PPE one at the following url: http://code.google.com/p/ps3hacking/wiki/HelloWorld.
It's intended to show how such a program can be compiled with the embedspu program, without makefiles.

Thursday, February 28, 2008

Wednesday, February 27, 2008

Realtime blue screen, background subtraction, OpenCV

Just trying to remove colors near blue using OpenCV.
It's a simple approach: image filtering and classification based on the blue and red channel intensities.
It still doesn't look good when the object is far from the camera.
I'm trying to improve it.
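In case it helps to picture the idea, here is a minimal, hypothetical sketch of this kind of blue/red classification with the OpenCV C API. The threshold value is a guess, and this is not the exact code used in the video:

#include <cv.h>
#include <highgui.h>

int main(void)
{
    CvCapture* capture = cvCaptureFromCAM(0);
    if (!capture) return 1;

    cvNamedWindow("foreground", CV_WINDOW_AUTOSIZE);

    for (;;) {
        IplImage* frame = cvQueryFrame(capture);   /* BGR frame from the webcam */
        if (!frame) break;

        CvSize size = cvGetSize(frame);
        IplImage* b = cvCreateImage(size, IPL_DEPTH_8U, 1);
        IplImage* g = cvCreateImage(size, IPL_DEPTH_8U, 1);
        IplImage* r = cvCreateImage(size, IPL_DEPTH_8U, 1);
        IplImage* mask = cvCreateImage(size, IPL_DEPTH_8U, 1);
        IplImage* result = cvCreateImage(size, IPL_DEPTH_8U, 3);

        cvSplit(frame, b, g, r, NULL);       /* OpenCV stores frames as B, G, R */
        cvSub(b, r, mask, NULL);             /* "blueness" = blue minus red intensity */

        /* keep only the pixels that are NOT strongly blue (threshold is a guess) */
        cvThreshold(mask, mask, 40, 255, CV_THRESH_BINARY_INV);

        cvZero(result);
        cvCopy(frame, result, mask);         /* background pixels stay black */
        cvShowImage("foreground", result);

        cvReleaseImage(&b); cvReleaseImage(&g); cvReleaseImage(&r);
        cvReleaseImage(&mask); cvReleaseImage(&result);

        if (cvWaitKey(10) == 27) break;      /* ESC quits */
    }

    cvReleaseCapture(&capture);
    return 0;
}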

Sunday, February 17, 2008

Installing gnome pilot for palm Z22

So, I was getting an error similar to this one while setting up the Palm on Fedora 8 (FC8) through the gpilot applet:


Failed to connect using device 'cradle', on port 'usb:'.
Check your configuration, as you requested a new-style libusb 'usb:'
syncing, but have the old-style 'visor' kernel module loaded.
You may need to select a 'ttyUSB...' device.


Well, the kernel module for the Palm must first be loaded.
The visor (http://en.wikipedia.org/wiki/Handspring_Visor) module seems to be readily available and works with Palm devices.
In order to add the module, simply type:

su
modprobe visor


You can type dmesg and see a message similar to this one:

visor 6-1:1.0: Handspring Visor / Palm OS converter detected
usb 6-1: Handspring Visor / Palm OS converter now attached to ttyUSB0
usb 6-1: Handspring Visor / Palm OS converter now attached to ttyUSB1


This tells you that /dev/ttyUSB0 or /dev/ttyUSB1 now links to the Palm (in my case it was ttyUSB1).

Unfortunately, this device is only accessible to the super user. In order to make it available to ordinary users, one can do the following:

su
updatedb
locate pilot.rules


The location of the pilot rules will be shown.
Then, just copy it to /etc/udev/rules.d/
In my case it was:

cp /usr/share/pilot-link/udev/60-pilot.rules /etc/udev/rules.d/


Now, edit the copied file /etc/udev/rules.d/60-pilot.rules

Change
BUS=="usb", SYSFS{product}=="Palm Handheld*|Handspring*",KERNEL=="ttyUSB[13579]", SYMLINK="pilot", GROUP="uucp", MODE="0660"
to
BUS=="usb", SYSFS{product}=="Palm Handheld*|Handspring*",KERNEL=="ttyUSB[13579]", SYMLINK="pilot", GROUP="uucp", MODE="0777"

Non-root users have access to the Palm now :)

Configure using gnome's pilot applet:

Go to the panel -> right click, then "Add to Panel", then Pilot Applet...
Now you should configure it. Note that a symlink will be created from /dev/pilot to ttyUSB1:

ls -all /dev/pilot
lrwxrwxrwx 1 root root 7 2008-02-17 14:06 /dev/pilot -> ttyUSB1

Configs should be like this:

type: USB
device: /dev/pilot
Speed (default): 57600
Click forward, forward... then click HotSync.

Now install Evolution or some other software that will be able to display your Palm data.
In order to synchronize contacts, memos, etc., just open the gpilot applet and choose 'Conduits'. There you can choose the calendar, contacts and some other options. Once you have Evolution installed, press the HotSync button on the Palm and everything will be synchronized.

Installing applications ( .prc files)

(The first step is to pause the gpilot daemon - simply right click and choose "Pause Daemon" - so that gpilot and the install commands don't get mixed up.)

In order to install applications, simply use pilot-xfer. For instance, to install Foo.prc, from the current directory, one shall use:


pilot-xfer -p /dev/pilot -i Foo.prc


And to remove it, use the same name as you get when you type

pilot-xfer -p /dev/pilot -l


Now remove it like:

pilot-xfer -p /dev/pilot --delete Foo


(where Foo is the name you obtained with the -l command).


I hope that helps :)


*************
Note that loading the module this way only works for one session. In order to have it loaded at every boot, one can do the following:

su
emacs /etc/rc.d/rc.local

Add this line to the file:

/sbin/modprobe visor

Saturday, February 16, 2008

Open source class scheduler

I was just wondering how useful it would be to have an open source class scheduler. I mean, a GTK or Qt based scheduler using Hungarian matching or some other operations research algorithm in order to schedule teachers and classes, or other types of tasks.
I've seen commercial versions for around US$100.00, but I'm still not sure how much benefit it would bring to the open source community.
I'm tracking this post's visits, so the number of Google searches that lead here will be recorded, and I'll decide whether it's worthwhile based on that number and the comments on this post.
Thanks for your feedback

Monday, February 11, 2008

GPUWire - A GPU implementation of the livewire image segmentation algorithm

This video shows one of the results of my master's thesis, which is a solution to the Single Source Shortest Path problem through the use of a GPU.
The idea belongs to the field of GPGPU, and my approach was to use the GPU to expand vertices in parallel, much like the Delta-stepping algorithm.
The source code, as well as my master's thesis, can be found at http://code.google.com/p/gpuwire/.
The GPU API used was NVidia's CUDA.



I'm currently making small corrections to the thesis text as well as trying to clean up the code.
It's interesting to note that the GUI uses Qt. I'll probably post an entry explaining how to glue CUDA and Qt together.
If you're interested, please leave me a comment.

Thursday, January 24, 2008

WebCam + OpenGL + OpenCV head tracking = Immersive 3d environment

UPDATE! This sample has evolved to a project sponsored by Google. Please check http://code.google.com/p/ehci

This is my first attempt to create a 3d immersive environment using a simple webcam to track the head's position.



This video shows my approach to combining OpenCV head tracking with a 3d OpenGL environment, so that the user sees objects from their head's point of view. Since the head tracking is 2d, no depth information is obtained. Besides that, the face detection algorithm takes around 200 ms, which yields a low frame rate (about 5 fps). I'm currently trying to improve that :)
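For the curious, here is a minimal, hypothetical sketch of the general idea - not the project's actual code - assuming the tracked head position has already been normalized to roughly [-1, 1]:

#include <GL/gl.h>
#include <GL/glu.h>

/* Hypothetical helper, called once per frame from the existing GLUT display
   callback. headX and headY come from the OpenCV face rectangle, normalized
   to roughly [-1, 1]; headDist is an arbitrary viewer distance in scene units. */
static void applyHeadView(float headX, float headY, float headDist)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    /* Move the OpenGL camera with the tracked head so that the scene is
       rendered from the head's point of view. */
    gluLookAt(headX, headY, headDist,   /* eye follows the tracked head    */
              0.0, 0.0, 0.0,            /* always look at the scene center */
              0.0, 1.0, 0.0);           /* "up" stays fixed                */
}

A fancier version would use an off-axis projection instead of just moving the eye, but simply driving gluLookAt with the tracked position is enough to get the parallax effect going.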

*******
Well, thanks to Walter Piechulla, I was able to decrease detection time to around 20 ms using the flag CV_HAAR_FIND_BIGGEST_OBJECT - which makes OpenCV detect only one face - as a parameter to the function cvHaarDetectObjects, whose call now looks like this:

CvSeq* faces = cvHaarDetectObjects( small_img, cascade, storage, 1.1, 2,
                                    CV_HAAR_FIND_BIGGEST_OBJECT, cvSize(30, 30) );

(By the way, I had to update OpenCV from CVS.)
As soon as I get some time, I'll make another video showing the speedup :D
Thanks, Piechulla!

********
The new version is available here: http://danielbaggio.blogspot.com/2008/03/enhanced-version-of-head-tracking-and.html

Tuesday, January 15, 2008

Avoiding ACM Web Account to download papers

In case you are a researcher and you really want that paper, but the ACM Web Account requirement is blocking you from seeing it, you might try the following:
  • Access the paper from your university - some universities have been granted access based on their IP range, which makes it very easy for researchers to download papers
  • Get yourself an ACM Web Account
  • In case you simply are not at the university when trying to see the paper, a useful workaround is to look for the author's homepage or the conference that published the article
For instance, I was looking for the article "Early Experience with Scientific Programs on the Cray MTA-2", which led me to this page, which doesn't allow me to get the file. Then, googling for the first author and a related term, "Wendell Anderson cray mta 2", led me to the conference page, from where I can get it. This generally works :)
Besides that, the ACM Web Account seems to be a great service. I would really recommend getting one :)
Hope this post can help you!


************** this text can be useful for google indexing *******************
Full-Text is a controlled feature.
To access this feature:
  • Please login with your ACM Web Account.
  • Please review the requirements below.