
October 12, 2015

Spectrum of oppression

by viggy — Categories: social, Uncategorized — 1 Comment

I recently came across this article in which Arundhati Roy mentions how she sees gender as a spectrum. It made me wonder whether oppression in general also happens over a spectrum. All the “isms” look at oppression within a very closed boundary, and all the “ists” conveniently choose to ignore oppression when it does not suit them. Communists have long been criticized for ignoring caste oppression and choosing to talk only about class oppression. They in turn blame feminists for being too narrow-minded and not fighting to uplift the poor. Atheists blame both for not looking at how religion is being used to create unrest and division amongst people, and ask why feminists and communists are not fighting it directly.

Beneath all of these is an act of oppression, and it comes in various shades. The oppressor does not choose his style of oppression; he chooses whom to oppress and for what benefit or profit. Why is it, then, that all the “ists” choose which oppression to fight against? How is it that we choose to ignore one form of oppression on one human being while we choose to fight another form?

Of course, choosing to fight at least one form of oppression is better than becoming an oppressor ourselves, because on the other side you have people who justify one form of oppression by citing another. Nature is definitely harsh, cruel and merciless. It treats living beings as no more than dust, but that in no way justifies what we do to each other. Human society has evolved by fighting the harshness of nature, right from the warm clothes we wear to the food we cultivate and store.

August 23, 2015

Running GeoWave from command line for HBase options

by viggy — Categories: Uncategorized

GeoWave provides the GeoWaveMain class, which can be used to run GeoWave from the command line. For regular users, GeoWave provides a wrapper script for the same, which is installed if you are using the rpm or GeoWave through other packages. In my case, since I was running in a development environment, I am using the GeoWaveMain class directly with options from the command line. Here I document the results I got while running the HBase-related commands through GeoWaveMain. The GeoWave documentation talks about running the commands from the command line here. I used that, along with the existing tests in the GeoWave source code, to deduce the commands.

In the following examples, I am running local HBase and ZooKeeper instances.

First, to run the commands, we need to generate the tools jar using the following command in the deploy source directory:

mvn package -P geowave-tools-singlejar  -Dmaven.test.skip=true

I am skipping tests because some of them fail in my local setup for various reasons, and I do not want that to be the reason the maven build fails.

Next, I also generate the hbase datastore jar using the following mvn command:

mvn package -P hbase-container-singlejar -Dmaven.test.skip=true -Dfindbugs.onlyAnalyze=true

Here I am also asking FindBugs only to analyze, as FindBugs currently fails the build due to the auto-generated code in FilterProtos, which is used for Filters in HBase. I have not yet been able to find a proper fix for this; the general workaround is to keep FindBugs from failing the build using this switch. It still generates the FindBugs XML report, which can be analysed later in the FindBugs GUI.

The last two commands should generate geowave-deploy-0.8.8-SNAPSHOT-tools.jar and geowave-deploy-0.8.8-SNAPSHOT-hbase-singlejar.jar in the target directory. Since we also use geotools-vector and gpx as formats in the following commands, we need to copy their jars here as well, or refer to them on the classpath. In our case, we just copy the jars to the target directory.

Assuming the user’s current working directory is target, the following command runs localhbaseingest:

java -cp geowave-deploy-0.8.8-SNAPSHOT-tools.jar:geowave-deploy-0.8.8-SNAPSHOT-hbase-singlejar.jar:geowave-format-vector-0.8.8-SNAPSHOT.jar mil.nga.giat.geowave.core.cli.GeoWaveMain -localhbaseingest -z localhost:2181 -n geowave -f geotools-vector -b ../../test/data/hail_test_case/hail-box-temporal-filter.shp

To get hdfshbaseingest to work, you need to start YARN from your hadoop installation. Here we also use the “gpx” format rather than “geotools-vector” (I still need to understand why), so we copy the jar of the gpx format into our current working directory. This command also expects that the directory gpx already exists in the HDFS file system. To create it, you can run the following command from your hadoop installation:

<Hadoop-Installation-Dir>/bin/hadoop fs -mkdir /gpx

Then we run the following command:

java -cp geowave-deploy-0.8.8-SNAPSHOT-tools.jar:geowave-deploy-0.8.8-SNAPSHOT-hbase-singlejar.jar:geowave-format-gpx-0.8.8-SNAPSHOT.jar: mil.nga.giat.geowave.core.cli.GeoWaveMain -hdfshbaseingest -z localhost:2181 -n geowave -f gpx -b ../../test/data/hail_test_case/hail-box-temporal-filter.shp -hdfs localhost:9000 -hdfsbase "/" -jobtracker "localhost:8032"

-hdfs is the hostname:port of the hadoop installation

-hdfsbase is the parent directory in which we want to ingest

-jobtracker is the hostname:port of the yarn installation.

Currently, for hdfshbasestage, I am getting the following error, which needs to be fixed:

➜  target git:(GEOWAVE-406) ✗ java -cp geowave-deploy-0.8.8-SNAPSHOT-tools.jar:geowave-deploy-0.8.8-SNAPSHOT-hbase-singlejar.jar:geowave-format-gpx-0.8.8-SNAPSHOT-tools.jar:. mil.nga.giat.geowave.core.cli.GeoWaveMain -hdfshbasestage -b ~/workspace/geowave/extensions/formats/gpx/src/test/resources/ -hdfs localhost:9000 -hdfsbase "/gpx/" -f gpx
2015-08-23 04:41:27,085 FATAL [main] ingest.AbstractIngestHBaseCommandLineDriver (AbstractIngestHBaseCommandLineDriver.java:applyArguments(146)) - Error parsing plugins
java.lang.IllegalArgumentException: Unable to find SPI plugin provider for ingest format ‘gpx’
at mil.nga.giat.geowave.core.ingest.AbstractIngestHBaseCommandLineDriver.getPluginProviders(AbstractIngestHBaseCommandLineDriver.java:196)
at mil.nga.giat.geowave.core.ingest.AbstractIngestHBaseCommandLineDriver.applyArguments(AbstractIngestHBaseCommandLineDriver.java:141)
at mil.nga.giat.geowave.core.ingest.AbstractIngestHBaseCommandLineDriver.run(AbstractIngestHBaseCommandLineDriver.java:75)
at mil.nga.giat.geowave.core.cli.GeoWaveMain.main(GeoWaveMain.java:48)

Similarly, for posthbasestage, I am getting the same error:

➜  target git:(GEOWAVE-406) ✗ java -cp geowave-deploy-0.8.8-SNAPSHOT-tools.jar:geowave-deploy-0.8.8-SNAPSHOT-hbase-singlejar.jar:geowave-format-gpx-0.8.8-SNAPSHOT-tools.jar:. mil.nga.giat.geowave.core.cli.GeoWaveMain -posthbasestage -hdfs localhost:9000 -hdfsbase "/gpx/" -f gpx  -z localhost:2181 -n geowave -jobtracker "localhost:8032"
2015-08-23 04:44:16,261 FATAL [main] ingest.AbstractIngestHBaseCommandLineDriver (AbstractIngestHBaseCommandLineDriver.java:applyArguments(146)) - Error parsing plugins
java.lang.IllegalArgumentException: Unable to find SPI plugin provider for ingest format ‘gpx’
at mil.nga.giat.geowave.core.ingest.AbstractIngestHBaseCommandLineDriver.getPluginProviders(AbstractIngestHBaseCommandLineDriver.java:196)
at mil.nga.giat.geowave.core.ingest.AbstractIngestHBaseCommandLineDriver.applyArguments(AbstractIngestHBaseCommandLineDriver.java:141)
at mil.nga.giat.geowave.core.ingest.AbstractIngestHBaseCommandLineDriver.run(AbstractIngestHBaseCommandLineDriver.java:75)
at mil.nga.giat.geowave.core.cli.GeoWaveMain.main(GeoWaveMain.java:48)
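This error usually means the jar on the classpath does not advertise an SPI provider for the format: Java's ServiceLoader discovers plugins through entry files under META-INF/services/ inside the jar. One way to debug is to check whether those entries are actually present in the jar you built. The following Python sketch inspects a jar (which is just a zip archive) for such entries; the interface and provider class names used here are made up for illustration and are not the actual GeoWave class names.

```python
import io
import zipfile

def spi_providers(jar_bytes):
    """Return the SPI entry files declared inside a jar (zip) archive."""
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as jar:
        return [name for name in jar.namelist()
                if name.startswith("META-INF/services/")]

# Build a stand-in jar in memory so the example is self-contained.
# The interface/provider names below are hypothetical placeholders.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr(
        "META-INF/services/example.IngestFormatPluginProviderSpi",
        "example.GpxIngestPlugin\n")

print(spi_providers(buf.getvalue()))
```

On a real jar you would read the file from disk instead of building it in memory; an empty result would explain the “Unable to find SPI plugin provider” failure above.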

August 23, 2015

Using Google’s Protocol Buffer library to write GeoWave Filters for HBase datastore

by viggy — Categories: Uncategorized

Accumulo provides Iterators which can run on Tablet Servers as Filters during a scan. GeoWave uses these in the form of local Client Filters and Distributable Filters which run on Tablet Servers when any scan is performed. As part of adding support for HBase, I needed to implement these filters in HBase. I have currently implemented two Filters, SingleEntryFilter and CqlHBaseQueryFilter, which are counterparts of SingleEntryFilterIterator and CqlQueryFilterIterator in Accumulo.

HBase makes use of Google’s Protocol Buffers to serialize data on the client side and send it across to the region servers. In this blog, I explain how I used the protobuf-java library to write the SingleEntryFilter for HBase in GeoWave.

Protobuf auto-generates part of the code from a .proto file using its own code generator. The .proto file needs to contain the information about the arguments your class accepts in its constructor, which package the class needs to be generated in, etc. The arguments that the class supports need to be serializable. Since in our case we are migrating from iterators, all the needed data is expected to be serializable, as it had to be serialized for the Accumulo iterators as well. I created a ‘protobuf’ directory inside the source directory ‘extensions/datastores/hbase/src/main/’ in the GeoWave source code, and in it created the following SingleEntryFilters.proto file.


option java_package = "mil.nga.giat.geowave.datastore.hbase.query.generated";
option java_outer_classname = "FilterProtos";
option java_generic_services = true;
option java_generate_equals_and_hash = true;
option optimize_for = SPEED;

message SingleEntryFilter {
required bytes adapterId = 1;
required bytes dataId = 2;
}
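As a sanity check on what the generated code will produce, the wire encoding of this message is simple: each `bytes` field is written as a one-byte key (the field number shifted left by three, ORed with wire type 2 for length-delimited), a varint length, and the raw bytes. This Python sketch hand-encodes the message for illustration only; the real serialization is done by the generated FilterProtos class.

```python
def varint(n):
    """Encode a non-negative integer as a protobuf varint."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)   # more bytes follow
        else:
            out.append(b)
            return bytes(out)

def encode_single_entry_filter(adapter_id, data_id):
    # field 1, wire type 2 (length-delimited) -> key byte 0x0A
    # field 2, wire type 2                    -> key byte 0x12
    return (b"\x0a" + varint(len(adapter_id)) + adapter_id +
            b"\x12" + varint(len(data_id)) + data_id)

msg = encode_single_entry_filter(b"adapter", b"row-1")
print(msg.hex())  # -> 0a07616461707465721205726f772d31
```

Decoding the same bytes with the generated Java class should recover the two fields, which is exactly what the filter's parseFrom method will do on the server side.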


Now, to generate the classes using protobuf, we need to install the protobuf compiler on the machine. You can download the compiler from here. The README.txt shipped with the compiler explains the installation quite well.

After a successful installation, the protoc executable will by default be in the src directory. Go to the source directory in which you want the generated package to be added. In my case, it was <geowave-src-directory>/extensions/datastores/hbase/src/main/.

Now, run the following command:

<path-to-protoc-installation-dir>/src/protoc -I=. --java_out=java/ protobuf/SingleEntryFilters.proto

protobuf/SingleEntryFilters.proto is the path to the .proto file from your current directory.


This generates the necessary FilterProtos class. Now we need to create the SingleEntryFilter class itself. We extend the FilterBase class provided by HBase to create new custom Filters. Lars George’s book, HBase: The Definitive Guide, explains developing custom Filters for HBase, and I used the example shared in the book’s GitHub repo to develop SingleEntryFilter.

It was only through that example that I learnt that I also need to implement the toByteArray and parseFrom methods in the custom Filter. Later I also found in the HBase log that the default parseFrom throws a DeserializationException, whose message tells the user to override it in the derived class.

July 29, 2015

It is not the person, it is the act which needs to be condemned

by viggy — Categories: Uncategorized

Thousands in India die every day. Some in their comfortable homes, peacefully. Some due to hunger and some due to cold. Some are killed in road accidents or murdered for petty reasons like money or love. Some are murdered for their caste or religion, in the name of God. Some sadly kill themselves over bad marks, bad love or, like many farmers, just bad rains. Each of these deaths is sad. Each of them needs to be avoided. When each of them happens, we need to hang our heads in shame for not being able to avoid it. Better health care, better support systems, better education and whatever else is possible for humankind. What else do we strive to achieve?

All this comes from recognizing the value that life has, the power it gives to any human and the limitless possibilities that a human is capable of when he/she is filled with life. Often I have joked that I live to eat, and yes, I am a foodie and good food is a great inspiration for me to live. But it is just one inspiration, and there are hundreds of other reasons to live your life fully: places to visit, people to meet, adventures to live through and what not.

Amidst all this, how can we accept a planned murder, even if it is the murder of the most heinous person in the world? The person here is just one actor in the whole circumstance. His/her life ends there and there is nothing more to it. Buried or burnt, that body feels nothing. What is more important is the society this act leaves behind. What happens to the society? Should the society celebrate this planned murder? Does the society become more violent after this act, or does it become more peaceful? What is the emotion that such an act generates? Does it unify the society, or does it instigate people to commit more such heinous acts? The killed person is now relieved of the impact of any of this, and it is the society which now has to come to terms with this act.

Yakub Memon or Ajmal Kasab: each individual, while committing his crimes, had a choice not to do it. The results of their acts were deplorable, heinous, cruel. They made their choice mainly because they could not accept the vastness of humanity. Society, while executing a person, also has a choice to make. There is no glory in killing another human being, especially in a cold-blooded, planned execution. It tells us that we as humans have given up on the possibility that the person can change, that we have nothing more to reason with him/her about, that in the vastness of humanity we cannot find a single place for that particular life to fit in. I cannot agree to this hopelessness about humanity.

The following is an intriguing paragraph from George Orwell’s essay “A Hanging”:

It is curious, but till that moment I had never realized what it means to destroy a healthy, conscious man. When I saw the prisoner step aside to avoid the puddle, I saw the mystery, the unspeakable wrongness, of cutting a life short when it is in full tide. This man was not dying, he was alive just as we were alive. All the organs of his body were working — bowels digesting food, skin renewing itself, nails growing, tissues forming — all toiling away in solemn foolery. His nails would still be growing when he stood on the drop, when he was falling through the air with a tenth of a second to live. His eyes saw the yellow gravel and the grey walls, and his brain still remembered, foresaw, reasoned — reasoned even about puddles. He and we were a party of men walking together, seeing, hearing, feeling, understanding the same world; and in two minutes, with a sudden snap, one of us would be gone — one mind less, one world less.


June 10, 2015

Simulation run time graphs of shadow-tor run with modified algorithm and default algorithm

by viggy — Categories: Uncategorized

Following are the two graphs that we obtained for the modified algorithm and the default algorithm in the 50 relays-180 clients and 100 relays-375 clients scenarios. Interestingly, our modified algorithm progresses linearly, whereas with the default algorithm, Shadow almost stops execution for long intervals.

Graph for 50 relays and 180 clients.
simulation-run-time-50r-180c

Graph for 100 relays and 375 clients.

simulation-run-time-for100r-375c

May 5, 2015

When government is the oppressor, corruption is the way of living

by viggy — Categories: social, Uncategorized

As usual, I took a KSRTC bus to college today. As usual, I bought the ticket from the conductor and, as usual, he wrote the change he owed me on the back of the ticket. One stop after I had boarded, the Ticket Checkers boarded the bus. These are people who pay a kind of surprise visit to the bus and check whether the conductor has issued tickets to all passengers and whether anyone is travelling without a ticket.

Generally, passengers and bus conductors have an aversion towards each other. I used to think that this may be because the conductor doesn’t give change properly, or may be rude in the way he/she talks to passengers. The conductor also sometimes doesn’t stop the bus at the stop. I had always thought it was due to these daily frictions that passengers don’t like the conductor.

Today, as usual, the Ticket Checkers checked all the passengers’ tickets. The conductor had been very prompt and had given tickets to all the passengers. Hence the Ticket Checkers did not have any complaint against him, and he did not have to bear any penalty. Usually, if any person is found travelling without a ticket, the conductor has to pay a penalty of Rs. 300, from what I know.

After the Ticket Checkers left, the conductor sighed like a student who had just passed his exam. Some of the passengers started talking amongst themselves and with him, saying that the Ticket Checkers were very disappointed that they did not get any reason to impose a penalty, and during the discussion I could see amazing solidarity of the passengers with the conductor. He talked to them nicely, and the passengers were abusing the Ticket Checkers as people who always want to loot, etc.

This is when I realized that the aversion between passengers and conductor is not really about bad behaviour. It is because, for the passengers, he is the face of the oppressor called government. Every time there is a hike in ticket prices, every time they have to travel in fully packed buses, they blame the government, and they see the conductor as the person representing it.

In the moment when the Ticket Checkers came to check tickets, the conductor also became part of the oppressed, and hence the passengers could easily show solidarity with him. This solidarity takes the form of corruption when the government employee, himself oppressed to a large extent, joins hands with the general public.

People blame the traffic police who catch them for violations because they see them as the face of the government. However, the same policeman becomes a friend when he gets a job done by going against the rules, because he is then on their side against the government.

Similarly, politicians who earned crores of rupees in the coal scam may be corrupt, but because they get work done through the government for the local people, their corrupt acts are completely ignored: for the people, these are just people cheating the oppressor.

April 8, 2015

Setting up Geowave and integrating it with GeoServer for the development environment

GeoWave is a library used to store, index and analyze geospatial data on top of Accumulo, which is a free software implementation of Google’s Bigtable. Accumulo in turn makes use of ZooKeeper to handle distributed synchronization and uses the Hadoop file system for distributed and scalable storage of the data. GeoWave decomposes multi-dimensional data into single-dimensional data using a transformation called a Space Filling Curve. GeoServer is a Java based server which provides a platform to view and edit geospatial data. Hence, at an abstract level, we can summarize that the data which is transformed by GeoWave and stored (ingested) in Accumulo can be extracted and viewed from GeoServer.
The following article explains how we can set up the system to have GeoWave and GeoServer working with Accumulo in a development environment.
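The Space Filling Curve idea can be sketched with a Z-order (Morton) curve, one of the simplest such transformations: interleaving the bits of the coordinates yields a single-dimensional key that tends to keep nearby points close together. This toy Python sketch is illustrative only and is not GeoWave’s actual implementation.

```python
def interleave_bits(x, y, bits=16):
    """Morton/Z-order code: interleave the bits of x and y into one key."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)       # x supplies the even bit positions
        z |= ((y >> i) & 1) << (2 * i + 1)   # y supplies the odd bit positions
    return z

# Nearby 2-D points map to nearby 1-D keys, which makes range
# scans over the 1-D key roughly spatially coherent.
print(interleave_bits(3, 5))  # -> 39
```

Decomposing the key back into coordinates is the inverse operation (de-interleaving the even and odd bits), which is how a point can be recovered from its single-dimensional index value.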


March 13, 2015

Bigtable: snippets/notes from the original Google’s paper

by viggy — Categories: Uncategorized

Following are the lines that I marked in Google’s original paper on Bigtable.

Introduction:

Bigtable has achieved several goals: wide applicability, scalability, high performance and high availability.
Bigtable does not support a full relational data model; instead, it provides clients with a simple data model that supports dynamic control over data layout and format, and allows clients to reason about the locality properties of the data represented in the underlying storage. Data is indexed using row and column names that can be arbitrary strings. Bigtable also treats data as uninterpreted strings, although clients often serialize various forms of structured and semi-structured data into these strings.

Data Model:
A Bigtable is a sparse, distributed, persistent multi-dimensional sorted map. The map is indexed by a row key, column key, and a timestamp; each value in the map is an uninterpreted array of bytes.
(row:string, column: string, time:int64) -> string

Row:
The row keys in a table are arbitrary strings (currently up to 64KB in size, although 10-100 bytes is a typical size for most of our users). Every read or write of data under a single row key is atomic (regardless of the number of different columns being read or written in the row).
Bigtable maintains data in lexicographic order by row key. The row range for a table is dynamically partitioned. Each row range is called a tablet, which is the unit of distribution and load balancing. As a result, reads of short row ranges are efficient and typically require communication with only a small number of machines.

Columns:
Column keys are grouped into sets called column families, which form the basic unit of access control. All data stored in a column family is usually of the same type (we compress data in the same column family together). A column family must be created before data can be stored under any column key in that family; after a family has been created, any column key within the family can be used. It is our intent that the number of distinct column families in a table be small (in the hundreds at most), and that families rarely change during operation. In contrast, a table may have an unbounded number of columns.
A column key is named using the following syntax: family:qualifier. Column family names must be printable, but qualifiers may be arbitrary strings. Access control and both disk and memory accounting are performed at the column-family level.

Timestamps:
Each cell in a Bigtable can contain multiple versions of the same data; these versions are indexed by timestamp. Bigtable timestamps are 64-bit integers.
The client can specify either that only the last n versions of a cell be kept, or that only new enough versions be kept.
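The data model in these notes can be mimicked with a toy in-memory map. This Python sketch only illustrates the (row, column, timestamp) indexing and the keep-last-n-versions policy described above; it is not an actual Bigtable client, and the row/column names are just examples.

```python
class ToyBigtable:
    """Toy (row, column, timestamp) -> value map, newest versions first."""
    def __init__(self, max_versions=3):
        self.cells = {}              # (row, column) -> [(ts, value), ...] newest first
        self.max_versions = max_versions

    def put(self, row, column, ts, value):
        versions = self.cells.setdefault((row, column), [])
        versions.append((ts, value))
        versions.sort(reverse=True)          # keep the list newest-first
        del versions[self.max_versions:]     # garbage-collect old versions

    def get(self, row, column, ts=None):
        """Latest value, or the newest value at or before ts."""
        versions = self.cells.get((row, column), [])
        if ts is None:
            return versions[0][1] if versions else None
        for t, v in versions:
            if t <= ts:
                return v
        return None

t = ToyBigtable(max_versions=2)
t.put("com.cnn.www", "contents:", 1, "<html>v1")
t.put("com.cnn.www", "contents:", 2, "<html>v2")
t.put("com.cnn.www", "contents:", 3, "<html>v3")
print(t.get("com.cnn.www", "contents:"))  # -> <html>v3
```

With max_versions=2, version 1 is garbage-collected after the third put, so a timestamped read at ts=1 finds nothing, mirroring the “only the last n versions of a cell be kept” setting.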

API:

Bigtable supports single-row transactions, which can be used to perform atomic read-modify-write sequences on data stored under a single row key. Bigtable can be used with MapReduce, a framework for running large-scale parallel computations developed at Google.

Building Blocks:

Bigtable uses the distributed Google File System(GFS) to store log and data files.  Bigtable depends on a cluster management system for scheduling jobs, managing resources on shared machines, dealing with machine failures, and monitoring machine status.

The Google SSTable file format is used internally to store Bigtable data. An SSTable provides a persistent, ordered, immutable map from keys to values, where both keys and values are arbitrary byte strings. Operations are provided to look up the value associated with a specified key, and to iterate over all key/value pairs in a specified key range. Internally, each SSTable contains a sequence of blocks (typically each block is 64KB in size, but this is configurable). A block index (stored at the end of the SSTable) is used to locate blocks; the index is loaded into memory when the SSTable is opened. A lookup can be performed with a single disk seek: we first find the appropriate block by performing a binary search in the in-memory index, and then read the appropriate block from disk. Optionally, an SSTable can be completely mapped into memory, which allows us to perform lookups and scans without touching disk.
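The block-index lookup described in that passage can be sketched as follows. This is a toy Python illustration, with in-memory lists standing in for on-disk blocks; the single linear scan of one block stands in for the single disk seek.

```python
import bisect

class ToySSTable:
    """Sorted key/value pairs split into blocks, plus an in-memory block index."""
    def __init__(self, items, block_size=2):
        items = sorted(items)
        self.blocks = [items[i:i + block_size]
                       for i in range(0, len(items), block_size)]
        self.index = [block[0][0] for block in self.blocks]  # first key of each block

    def get(self, key):
        # Binary search the in-memory index for the candidate block...
        i = bisect.bisect_right(self.index, key) - 1
        if i < 0:
            return None
        # ...then "read that one block from disk" and scan it.
        for k, v in self.blocks[i]:
            if k == key:
                return v
        return None

sst = ToySSTable([("a", 1), ("c", 2), ("e", 3), ("g", 4), ("i", 5)])
print(sst.get("e"))  # -> 3
```

A real SSTable block would also be the unit of compression and checksumming, but the two-step lookup (index search, then one block read) is the same.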

Bigtable relies on a highly-available and persistent distributed lock service called Chubby. A Chubby service consists of five active replicas, one of which is elected to be the master and actively serve requests. Chubby uses the Paxos algorithm to keep its replicas consistent in the face of failure. Chubby provides a namespace that consists of directories and small files. Each directory or file can be used as a lock, and reads and writes to a file are atomic. Each Chubby client maintains a session with a Chubby service. A client’s session expires if it is unable to renew its session lease within the lease expiration time. When a client’s session expires, it loses any locks and open handles. Chubby clients can also register callbacks on Chubby files and directories for notification of changes or session expiration.

Implementation:

The Bigtable implementation has three major components: a library that is linked into every client, one master server, and many tablet servers. Tablet servers can be dynamically added (or removed) from a cluster to accommodate changes in workloads.
Because Bigtable clients do not rely on the master for tablet location information, most clients never communicate with the master. As a result, the master is lightly loaded in practice.

Tablet Location:
We use a three-level hierarchy analogous to that of a B+ tree to store tablet location information.
With a modest limit of 128 MB METADATA tablets, our three-level location scheme is sufficient to address 2^34 tablets.
The client library caches tablet locations. If the client does not know the location of a tablet, or if it discovers that cached location information is incorrect, then it recursively moves up the tablet location hierarchy. If the client’s cache is empty, the location algorithm requires three network round-trips, including one read from Chubby. If the client’s cache is stale, the location algorithm could take up to six round-trips, because stale cache entries are only discovered upon misses.

<Need to go through Tablet Location, Tablet Assignment and Tablet Serving again>

Compactions:
When the memtable size reaches a threshold, the memtable is frozen, a new memtable is created, and the frozen memtable is converted to an SSTable and written to GFS. This minor compaction process has two goals: it shrinks the memory usage of the tablet server, and it reduces the amount of data that has to be read from the commit log during recovery if this server dies.
Every minor compaction creates a new SSTable.
We bound the number of such files by periodically executing a merging compaction in the background. A merging compaction reads the contents of a few SSTables and the memtable, and writes out a new SSTable. The input SSTables and memtable can be discarded as soon as the compaction has finished.
A merging compaction that rewrites all SSTables into exactly one SSTable is called a major compaction. A major compaction produces an SSTable that contains no deletion information or deleted data.
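The minor/major compaction cycle from these notes can be sketched with plain dicts standing in for the memtable and SSTables. This is a toy Python illustration only; real compactions work over sorted, immutable files and a commit log.

```python
TOMBSTONE = object()   # deletion marker, like Bigtable's deletion entries

class ToyTablet:
    def __init__(self, memtable_limit=2):
        self.memtable = {}
        self.sstables = []               # frozen memtables, oldest first
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            self.minor_compaction()      # memtable reached its threshold

    def delete(self, key):
        self.put(key, TOMBSTONE)         # deletes are just special writes

    def minor_compaction(self):
        """Freeze the memtable into a new immutable 'SSTable'."""
        self.sstables.append(dict(self.memtable))
        self.memtable = {}

    def major_compaction(self):
        """Merge all SSTables into one, dropping tombstones and deleted data."""
        merged = {}
        for sst in self.sstables:        # oldest first, so newer values win
            merged.update(sst)
        merged = {k: v for k, v in merged.items() if v is not TOMBSTONE}
        self.sstables = [merged]

tab = ToyTablet(memtable_limit=2)
tab.put("a", 1)
tab.put("b", 2)        # triggers a minor compaction
tab.delete("a")
tab.put("c", 3)        # triggers another minor compaction
tab.major_compaction()
print(tab.sstables)    # -> [{'b': 2, 'c': 3}]
```

After the major compaction there is exactly one SSTable and the tombstone for "a" is gone, matching the claim that a major compaction produces an SSTable with no deletion information or deleted data.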

Refinements:
Locality groups:

Clients can group multiple column families together into a locality group. A separate SSTable is generated for each locality group in each tablet.

Compression:

The user-specified compression format is applied to each SSTable block. Many clients use a two-pass custom compression scheme. The first pass uses Bentley and McIlroy’s scheme, which compresses long common strings across a large window. The second pass uses a fast compression algorithm that looks for repetitions in a small 16 KB window of the data.

Caching for read performance:

To improve read performance, tablet servers use two levels of caching. The Scan Cache is a higher-level cache that caches the key-value pairs returned by the SSTable interface to the tablet server code. The Block Cache is a lower-level cache that caches SSTables blocks that were read from GFS. The Scan Cache is most useful for applications that tend to read the same data repeatedly. The Block Cache is useful for applications that tend to read data that is close to the data they recently read.

Bloom Filters:

A read operation has to read from all SSTables that make up the state of a tablet. If these SSTables are not in memory, we may end up doing many disk accesses. We reduce the number of accesses by allowing clients to specify that Bloom filters should be created for SSTables in a particular locality group. A Bloom filter allows us to ask whether an SSTable might contain any data for a specified row/column pair.
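The idea can be sketched with a minimal Bloom filter: set k hash-derived bits per added key, and report “might contain” only if all k bits are set, which guarantees no false negatives. This toy Python sketch is illustrative; a real implementation would size the bit array and hash count from the expected number of elements and the target false-positive rate.

```python
import hashlib

class ToyBloomFilter:
    """Answers 'might contain?' with no false negatives."""
    def __init__(self, size=256, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = 0                      # bit array stored as a Python int

    def _positions(self, item):
        # Derive `hashes` bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        return all(self.bits & (1 << pos) for pos in self._positions(item))

bf = ToyBloomFilter()
bf.add("row1/contents:")
print(bf.might_contain("row1/contents:"))  # -> True
```

In the SSTable setting, a "False" answer lets a read skip that SSTable's disk access entirely, while a "True" answer only means the row/column pair might be present.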

Exploiting immutability:

Besides the SSTable caches, various other parts of the Bigtable system have been simplified by the fact that all of the SSTables that we generate are immutable.
The only mutable data structure that is accessed by both reads and writes is the memtable. To reduce contention during reads of the memtable, we make each memtable row copy-on-write and allow reads and writes to proceed in parallel.
Since SSTables are immutable, the problem of permanently removing deleted data is transformed to garbage collecting obsolete SSTables. Each tablet’s SSTables are registered in the METADATA table. The master removes obsolete SSTables as a mark-and-sweep garbage collection over the set of SSTables, where the METADATA table contains the set of roots.

January 8, 2015

“No results found” Issue in Search Code

by viggy — Categories: Uncategorized — 1 Comment

SearchCode is an amazing tool for finding code snippets in existing projects and learning how others use a particular piece of code. However, here is an issue I found in its search. When I tried to go to the second page of a search, it showed me “No results found”, even though the first page indicated there were more results than it displayed. In my case, it told me there were 38 results but showed only 14 on the first page. Here are the screenshots. This is the link of the error.


(Screenshots: Page1aOfSearchCode, Page1bOfSearchCode, Page2OfSearchCode)

November 30, 2014

What’s wrong with Government snooping/surveillance?

by viggy — Categories: Uncategorized

A member on the ILUGC list recently asked this question, and I spent some time putting together these points as an answer.

1) Can we trust the government itself?
“Quis custodiet ipsos custodes?”, which translates to “Who will guard the guards themselves?”

The government itself is made up of people. Even though we can proudly say that we democratically elected the government, can we trust it completely? The various scams of state and central governments give us good reason why we can’t trust those running the government. Also, a government is in itself transient; you never know what kind of government may come in the future.

2) Security of the data collected

It is common knowledge that when any riot happens, the organizers easily target people from specific communities, thanks to data such as the Voter ID card list available for that region. Hence, even if we blindly trust the government, the fact that such collated data exists makes it a honeypot attracting bad elements in society to misuse it.

3) Mass Surveillance vs targeted Surveillance

Though there may be some strong reasons for a government to conduct surveillance on specific individuals, there is very little rationale for mass surveillance. The very notion that everything you do or say is being recorded will curtail freedom of expression. Every time someone wants to express something a little off the beaten path, he/she might have to take extreme caution, and in many cases people might not come forward at all to express it.

An analogy: imagine a watchtower in the central yard of a jail, where the prisoners can’t see who is watching from the tower. Since they know there is a 24-hour watch on it, the notion that someone may be watching is itself enough to deter them from trying anything to escape.

Even in the case of targeted surveillance, we know how the Gujarat government ‘allegedly’ misused the state machinery to snoop on an individual. So even the means to do targeted surveillance need very stringent checks and balances.

4) Eternity of the data collected

The idea that the data collected will still exist years later is also something to worry about. A simple example is what happened with Section 377. When the High Court declared it unconstitutional in 2009, many people came forward expressing their true orientation. However, when the Supreme Court overturned the HC decision, it left all the people who had come forward openly as sitting ducks for harassment by the authorities.

5) Option to opt-out

As in the case of Aadhaar, which looks like it will stay in India now that the new government also seems comfortable with it, there is little option for those who value their privacy to opt out of these schemes.

Since the government in India does provide a lot of amenities, Maria Xynou, who works on surveillance in India at CIS-India, has mentioned that there is an attitude amongst us of looking at the government like a parent who takes care of us. This attitude makes us put a lot of trust in the government, which is dangerous.

You can watch her talk on the Indian surveillance state at CCC here.