Q&As: Parallelism and Advanced Compression




(Q) How can you manually set the degree of parallelism at the object level?

  • You can set a fixed DOP at the table or index level:
  • ALTER TABLE sales PARALLEL 8;
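
The same syntax applies to indexes, and PARALLEL without a number lets Oracle compute a default DOP. A minimal sketch (the object names are hypothetical):

ALTER INDEX sales_idx PARALLEL 4;   -- fixed DOP of 4 for the index
ALTER TABLE sales PARALLEL;         -- default DOP, computed by Oracle
ALTER TABLE sales NOPARALLEL;       -- disable parallel execution again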


(Q) Which of the operations below can make use of parallel execution? (Examples follow the list.)

(1) When accessing objects: table scans, index fast full scans, partitioned index range scans
(2) Joins: nested loops, sort merge, hash, star transformations
(3) DDL statements: CTAS, Create Index, Rebuild Index, Rebuild Index Partition, Move/Split/Coalesce Partition
(4) DML statements
(5) Parallel query
(6) Other SQL operations: GROUP BY, NOT IN, SELECT DISTINCT, UNION, UNION ALL, CUBE, and ROLLUP, as well as aggregate and table functions
(7) SQL*Loader, e.g. $ sqlldr CONTROL=load1.ctl DIRECT=true PARALLEL=true
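
For instance, parallel DDL (a CTAS) and a parallel query can be requested like this. A minimal sketch; the table names and DOP values are hypothetical:

CREATE TABLE sales_copy PARALLEL 4 AS SELECT * FROM sales;   -- parallel CTAS
SELECT /*+ PARALLEL(s, 8) */ COUNT(*) FROM sales s;          -- parallel query via hint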


(Q) On which types of objects can parallel execution NOT be used?

Parallel DDL cannot be used on tables with object or LOB columns


(Q) How can you gather I/O Calibration statistics? How often should it be done?

  • Use the DBMS_RESOURCE_MANAGER.CALIBRATE_IO procedure
  • I/O calibration is a one-time action if the physical hardware does not change.
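
A minimal sketch of a calibration run; the disk count and latency values below are assumptions that should be adjusted to your storage. Results can be checked in V$IO_CALIBRATION_STATUS and DBA_RSRC_IO_CALIBRATE:

SET SERVEROUTPUT ON
DECLARE
  l_max_iops PLS_INTEGER;
  l_max_mbps PLS_INTEGER;
  l_latency  PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 4,    -- assumption: number of physical disks/LUNs
    max_latency        => 10,   -- assumption: maximum tolerated latency (ms)
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_latency);
  DBMS_OUTPUT.PUT_LINE('max_iops = ' || l_max_iops);
  DBMS_OUTPUT.PUT_LINE('max_mbps = ' || l_max_mbps);
  DBMS_OUTPUT.PUT_LINE('actual_latency = ' || l_latency);
END;
/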

Advanced Compression


(Q) What is Advanced Compression?

Introduced in Oracle 11g, it includes compression for:
  • structured data (numbers, characters)
  • unstructured data (documents, images, etc.)
  • backups (RMAN and Data Pump)
  • network transport (redo log transport during Data Guard gap resolution)



(Q) What are the benefits of Advanced Compression?

(a) Storage reduction – compression of all data types
(b) Performance improvement – compressed blocks result in higher I/O throughput
(c) Memory efficiency – Oracle keeps data compressed in memory
(d) Backups – enhanced compression capabilities
(e) Data Guard – faster synchronization of databases during the gap resolution process



(Q) What improvements does Advanced Compression bring to the table compression feature introduced in Oracle9i?

With the Oracle9i table compression feature, data could be compressed ONLY during bulk load operations.
With Advanced Compression, data is also compressed during INSERT and UPDATE operations. Advanced Compression also adds compression and deduplication of SecureFiles.


(Q) Does table data in compressed tables get decompressed before it is read?

No. Oracle reads directly from compressed blocks in memory.


(Q) What features are included in the Advanced Compression option?

  • OLTP table compression – improved query performance with minimal write performance overhead
  • SecureFiles – compression for any unstructured content, plus deduplication to reduce redundancy
  • RMAN – multiple backup compression levels (trading speed against compression ratio)
  • Data Pump – exports can be compressed
  • Data Guard – redo data can be compressed (reduced network traffic, faster gap resolution)


(Q) What types of data compression can be done with RMAN (using the Advanced Compression option)?

  • HIGH – good for backups over slower networks
  • MEDIUM – recommended for most environments (about the same as regular compression)
  • LOW – least effect on backup throughput


(Q) How do you enable the Advanced Compression option?

  • Set the parameter enable_option_advanced_compression = TRUE
  • With the Advanced Compression option enabled, you can, for example:
    • RMAN> CONFIGURE COMPRESSION ALGORITHM 'HIGH';   -- or 'MEDIUM' | 'LOW'
  • V$RMAN_COMPRESSION_ALGORITHM describes the supported algorithms
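
To see what is available, you can query the view directly; a minimal sketch (column names as documented for 11g):

SQL> SELECT algorithm_name, algorithm_description, is_default
     FROM v$rman_compression_algorithm;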

(Q) How can the various features under Advanced Compression be turned on?

For table compression – methods of table compression in 11gR2:
  • Basic compression – direct-path load only
    e.g. CREATE/ALTER TABLE … COMPRESS [BASIC]
  • OLTP compression – compresses during all DML operations
    e.g. CREATE/ALTER TABLE … COMPRESS FOR OLTP
  • Warehouse compression (Hybrid Columnar Compression)
  • Online archival compression (Hybrid Columnar Compression)

For SecureFiles –
e.g. CREATE TABLE t1 (a CLOB) LOB(a) STORE AS SECUREFILE ( COMPRESS LOW [MEDIUM|HIGH] DEDUPLICATE [KEEP_DUPLICATES] )

For RMAN –
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;
or
RMAN> CONFIGURE DEVICE TYPE [DISK | TAPE] BACKUP TYPE TO COMPRESSED BACKUPSET;

For Data Pump –
COMPRESSION = [ALL|DATA_ONLY|METADATA_ONLY|NONE]
  • ALL and DATA_ONLY require the Advanced Compression option.
  • e.g. expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_comp.dmp COMPRESSION=DATA_ONLY

MongoDB with Python: a quick introduction (I)



Here are some basic steps for data manipulation in MongoDB using Python.

Download pymongo
pymongo is a native Python driver for MongoDB.
The PyMongo distribution contains tools for working with MongoDB.

(1) Installing PyMongo is very simple if you have setuptools installed. To install setuptools:
(a) Download the setuptools egg file for your version of Python.
(b) Once downloaded, execute the egg as if it were a shell script:
$ sudo sh setuptools-0.6c11-py2.6.egg

(2) With setuptools installed, you can install pymongo using:
$ sudo easy_install pymongo
Searching for pymongo
Best match: pymongo 2.0.1
Processing pymongo-2.0.1-py2.6-linux-i686.egg
pymongo 2.0.1 is already the active version in easy-install.pth

Using /usr/local/lib/python2.6/dist-packages/pymongo-2.0.1-py2.6-linux-i686.egg
Processing dependencies for pymongo
Finished processing dependencies for pymongo

Alternatively, you can install from source:
$ git clone git://github.com/mongodb/mongo-python-driver.git pymongo
$ cd pymongo/
$ python setup.py install

To test whether the installation was successful, try importing the pymongo package in Python; it should not raise an exception:
jdoe@lambda:$ python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) 
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 
>>> import pymongo
>>>

Connect to the MongoDB server and check that you're connected to the local host on the default port.
>>> from pymongo import Connection
>>> connection = Connection()            -- create a connection with the default server/port
>>> connection                           -- print connection details
Connection('localhost', 27017)

-- You can explicitly specify the host and TCP port where the MongoDB service you want to connect to is running.
>>> connection = Connection('192.117.47.23', 20120)
>>>

Connect to a database
Once connected to the database server, you need to connect to a specific mongodb database.
>>> connection.database_names()       --- list the available databases in the server
[u'mynewdb', u'local', u'test']
>>>
>>> db = connection['mynewdb']        --- connects to 'mynewdb'
>>>  
>>> db.name                           --- print the name of the database you're connected to
u'mynewdb'
>>>

Access database collections
Collections can be thought of as analogous to tables in relational databases. To see the existing collections in the database:
>>> db.collection_names()           --- list existing collections
[u'mycollection', u'system.indexes', u'things', u'comments']
>>>
>>> things = db['things']
>>>
>>> things.name                     --- print collection name
u'things'
>>>
>>> things.database                 --- database that holds the collection
Database(Connection('localhost', 27017), u'mynewdb')
>>>
>>> things.count()                  --- get the number of existing documents in the collection
5


  • Data in MongoDB is manipulated with CRUD operations: Create, Retrieve, Update, Delete.
  • These are the atomic operations used to manipulate the data.
  • They are method calls equivalent to DML statements in relational databases (INSERT, SELECT, UPDATE, DELETE).
  • Comparing data manipulation operations on a relational table and on a MongoDB collection:
Relational Database: Table BLOG (author, post, tags, date)
MongoDB: Collection BLOG (columns not statically defined)

INSERT statement
SQL> INSERT INTO blog
     VALUES ('joe', v_post, 'MongoDB, Python', SYSDATE);
>>> post = { "author": "joe",
        "text": "Blogging about MongoDB",
        "tags": ["MongoDB", "Python"],
        "date": datetime.datetime.utcnow()}
>>> db.blog.insert(post)

SELECT statement
SQL> SELECT * FROM blog
     WHERE author = 'joe';
>>> db.blog.find({"author": "joe"})

UPDATE statement
SQL> UPDATE blog SET tags = 'MongoDB, Python'
     WHERE author = 'joe';
>>> db.blog.update({"author": "joe"},
        { "$set": {"tags": ["MongoDB", "Python"]}})

DELETE statement
SQL> DELETE FROM blog WHERE author = 'joe';
>>> db.blog.remove({"author": "joe"})

Creating a new collection
Databases and Collections in MongoDB are created only when the first data is inserted.
$ ipython
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) 
Type "copyright", "credits" or "license" for more information.
...
In [2]: import pymongo                  --- import pymongo package
In [3]: from pymongo import Connection
In [4]: from bson import ObjectId

In [5]: connection = Connection()
In [6]: connection
Out[6]: Connection('localhost', 27017)  --- connected to localhost, in the default TCP port
In [7]: connection.database_names()     --- list existing databases
Out[7]: [u'test', u'local']

In [8]: db = connection['blogdb']       --- connect to a new database. 
                                        --- It will be created when the first object is inserted.

In [9]: post = { "author": "John", 
   ...:          "text": "Blogging about MongoDB"}

In [10]: db.posts.insert(post)          --- the first insert creates the new collection 'posts'
Out[10]: ObjectId('...')
In [11]: db.collection_names()
Out[11]: [u'system.indexes', u'posts']


Note: Collections can also be organized in namespaces, defined using a dot notation. For example, you could create two collections named: book.info and book.authors.
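
For instance, with pymongo you could create them like this (a minimal sketch; the document contents are made up):

In [12]: db['book.info'].insert({'title': 'Python Notes'})
In [13]: db['book.authors'].insert({'name': 'Joann'})
In [14]: db.collection_names()          --- both namespaced collections now appear in the list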

Inserting a document in a collection
  • In MongoDB, documents within a collection do not all have to have the same number and type of fields ("columns"). In other words, schemas in MongoDB are dynamic and can vary within one collection.
  • PyMongo uses dictionary objects to represent JSON-style documents.
  • To add a new document to a collection, using ipython:
In [9]: post = { 
   ...:     'author': 'Joann',
   ...:     'text': 'Just finished reading the Book of Nights'}

In [10]: db.posts.insert(post)        --- Method call to create a new document (post)
Out[10]: ObjectId('4eb99ad5a9e15833b1000000')

In [17]: for post in db.posts.find():   --- list all documents in the posts collection
   ....:     post
   ....:     
   ....:     
Out[17]: 
{u'_id': ObjectId('4eb99ad5a9e15833b1000000'),
 u'author': u'Joann',
 u'text': u'Just finished reading the Book of Nights'}
  • Note that you don't need to specify the "_id" field when inserting a new document into a collection.
  • The document identifier is automatically generated by the database and is unique across the collection.
  • You can also execute bulk inserts:
In [13]: many_posts = [{'author': 'David',
   ....:                'text' : "David's Blog"},
   ....:               {'author': 'Monique',
   ....:                'text' : 'My photo blog'}]

In [14]: db.posts.insert(many_posts)
Out[14]: [ObjectId('4eb9bcada9e15809f3000000'), ObjectId('4eb9bcada9e15809f3000001')]

In [15]: for post in db.posts.find():
   ....:     post
   ....:     
   ....:     
Out[15]: 
{u'_id': ObjectId('4eb99ad5a9e15833b1000000'),
 u'author': u'Joann',
 u'text': u'Just finished reading the Book of Nights'}
Out[15]: 
{u'_id': ObjectId('4eb9bcada9e15809f3000000'),
 u'author': u'David',
 u'text': u"David's Blog"}
Out[15]: 
{u'_id': ObjectId('4eb9bcada9e15809f3000001'),
 u'author': u'Monique',
 u'text': u'My photo blog'}

Selecting (reading) documents inside collections
  • Data in MongoDB is represented by structures of key-value pairs, using JSON-style documents.
  • Let's query the collection "things" and ask for ONE document in that collection. Use the find_one() method.
>>> things.find_one()                             --- returns the first document in the collection
{u'_id': ObjectId('4eb787821b02fd09c403b219'), u'name': u'mongo'}

Here it returned a document containing two fields (key-value pairs): 
  "_id": ObjectId('4eb787821b02fd09c403b219')  --- (an identifier for the document), and 
  "name": 'mongo'                              --- a "column" "name" with its associated value, the string 'mongo'.

We can also define criteria for the query. For example,
(a) return one document with field "name" equal to "mongo"
>>> things.find_one({"name":"mongo"});
{u'_id': ObjectId('4eb787821b02fd09c403b219'), u'name': u'mongo'}
>>>

(b) return one document with field "name" equal to "book"
>>> things.find_one({"name":"book"});
{u'keywords': [u'NoSQL', u'MongoDB', u'PyMongo'], u'date': datetime.datetime(2011, 11, 7, 19, 47, 44, 722000), u'_id': ObjectId('...'), u'name': u'book', u'title': u'Mastering MongoDB'}

Note: The dynamic nature of MongoDB database schemas can be seen in the results of the queries above. Here the collection "things" has two documents with different numbers of fields ("columns") and data types: 
 {"name": "mongo"}
 {"name": "book", "title": "Mastering MongoDB", "keywords": ["NoSQL", "MongoDB", "PyMongo"], "date": datetime.datetime(2011, 11, 7, 19, 47, 44, 722000)} 

Querying more than one document
A query returns a cursor pointing to all the documents that matched the query criteria.
To see these documents you need to iterate through the cursor elements:
>>> for thing in things.find():
...     thing
... 
{u'_id': ObjectId('...'), u'name': u'mongo'}
{u'x': 4.0, u'_id': ObjectId('...'), u'j': 1.0}
{u'x': 4.0, u'_id': ObjectId('...'), u'j': 2.0}
{u'x': 4.0, u'_id': ObjectId('...'), u'j': 3.0}
{u'x': 4.0, u'_id': ObjectId('...'), u'j': 4.0}
{u'keywords': [u'NoSQL', u'MongoDB', u'PyMongo'], u'date': datetime.datetime(...), u'_id': ObjectId('...'), u'name': u'book', u'title': u'Mastering MongoDB'}
{u'keywords': [u'programming', u'Python', u'MongoDB'], u'date': datetime.datetime(...), u'_id': ObjectId('...'), u'name': u'book', u'title': u'Python and MongoDB'}
{u'name': u'book', u'title': u'Python Notes', u'keywords': [u'programming', u'Python'], u'year': 2011, u'date': datetime.datetime(...), u'_id': ObjectId('4...')}

-- Alternatively, you can explicitly define a cursor variable: 
>>> cursor = things.find()
>>> for x in cursor:
...     x
... 
{u'x': 4.0, u'_id': ObjectId('...'), u'j': 1.0}
{u'x': 4.0, u'_id': ObjectId('...'), u'j': 2.0}
{u'x': 4.0, u'_id': ObjectId('...'), u'j': 3.0}
{u'x': 4.0, u'_id': ObjectId('...'), u'j': 4.0}
{u'keywords': [u'NoSQL', u'MongoDB', u'PyMongo'], u'date': datetime.datetime(...), u'_id': ObjectId('...'), u'name': u'book', u'title': u'Mastering MongoDB'}
{u'keywords': [u'programming', u'Python', u'MongoDB'], u'date': datetime.datetime(...), u'_id': ObjectId('...'), u'name': u'book', u'title': u'Python and MongoDB'}
{u'name': u'book', u'title': u'Python Notes', u'keywords': [u'programming', u'Python'], u'year': 2011, u'date': datetime.datetime(...), u'_id': ObjectId('...')}
>>> 


You can also return only some of the document fields (similar to a SQL query that returns only a subset of the table columns):
>>> for thing in things.find({"name":"book"}, {"keywords": 1}):
...     thing
... 
{u'keywords': [u'NoSQL', u'MongoDB', u'PyMongo'], u'_id': ObjectId('...')}
{u'keywords': [u'programming', u'Python', u'MongoDB'], u'_id': ObjectId('...')}
{u'keywords': [u'programming', u'Python'], u'_id': ObjectId('...')}
>>> 
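
Note that the "_id" field is returned by default; you can suppress it explicitly in the projection. A minimal sketch:

>>> for thing in things.find({"name":"book"}, {"keywords": 1, "_id": 0}):
...     thing
... 
{u'keywords': [u'NoSQL', u'MongoDB', u'PyMongo']}
{u'keywords': [u'programming', u'Python', u'MongoDB']}
{u'keywords': [u'programming', u'Python']}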

Updating documents in collections
  • MongoDB supports atomic updates of individual document fields, as well as more traditional updates that replace an entire document.
  • Use the update() method to entirely replace the document matching the criteria with a new document.
  • If you want to modify only some attributes of a document, use an update modifier such as $set.
  • update() usually takes two parameters (a sketch with an optional third follows this list):
    • the first selects the documents that will be updated (similar to the WHERE clause in SQL);
    • the second contains the new values for the document attributes.
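
By default update() modifies only the first matching document; in pymongo you can pass multi=True as an extra keyword argument to update every match. A minimal sketch:

>>> db.blog.update({"author": "Monique"},
...                {"$set": {"tags": ["MongoDB"]}},
...                multi=True)        --- apply the $set to all matching documents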


Example: insert a new document in the blog collection, and update the tag values.
(1) Insert a new document in the blog collection

>>> new_post = { "author": "Monique", 
...       "text": "Sharding in MongoDB",
...       "tags": ["MongoDB"],
...       "date": datetime.datetime.utcnow()};
>>>
>>> db.blog.insert(new_post)
ObjectId('...')
>>> 

(2) list documents in the collection
>>> for post in db.blog.find():
...     post
... 
{u'date': datetime.datetime(2011, 11, 7, 22, 10, 43, 77000), u'text': u'Blogging about MongoDB', u'_id': ObjectId('...'), u'author': u'John', u'tags': [u'MongoDB', u'NoSQL', u'Python']}
{u'date': datetime.datetime(2011, 11, 8, 1, 5, 32, 604000), u'text': u'Sharding in MongoDB', u'_id': ObjectId('...'), u'author': u'Monique', u'tags': [u'MongoDB']}
>>> 

Now, update the post where the author was Monique.
(1) Replace the document with an entirely new document
>>> db.blog.update({"author":"Monique"}, { "author": "Monique", "text": "Sharding in MongoDB", "tags": ["MongoDB", "scalability"], "date": datetime.datetime.utcnow()});
>>> for post in db.blog.find():
...     post
... 
{u'date': datetime.datetime(2011, 11, 7, 22, 10, 43, 77000), u'text': u'Blogging about MongoDB', u'_id': ObjectId('...'), u'author': u'John', u'tags': [u'MongoDB', u'NoSQL', u'Python']}
{u'date': datetime.datetime(2011, 11, 8, 1, 8, 43, 416000), u'text': u'Sharding in MongoDB', u'_id': ObjectId('...'), u'author': u'Monique', u'tags': [u'MongoDB', u'scalability']}
>>> 

Note that the previous update replaced the document entirely, even though all you needed to do was add one new tag to the document's tags field. If you call the update method and pass only the new values for the tags attribute, the resulting document will be incorrect:
>>> db.blog.update({"author":"Monique"}, { "tags": ["MongoDB", "scalability"]});
>>>
>>> for post in db.blog.find():
...     post
... 
{u'date': datetime.datetime(...), u'text': u'Blogging about MongoDB', u'_id': ObjectId('...'), u'author': u'John', u'tags': [u'MongoDB', u'NoSQL', u'Python']}
{u'_id': ObjectId('...'), u'tags': [u'MongoDB', u'scalability']}                  --- updated document lost all its other fields
>>> 

(2) Another way to update only some fields of a document is to use the $set update modifier.
  • The $set modifier works like the SET clause in a SQL UPDATE statement, with which you specify the columns to be updated.
>>> db.blog.update({"author":"Monique"}, { "$set": {"tags": ["MongoDB","Scalability"]}});
>>>
>>> for post in db.blog.find():
...     post
... 
{u'date': datetime.datetime(...), u'text': u'Blogging about MongoDB', u'_id': ObjectId('...'), u'author': u'John', u'tags': [u'MongoDB', u'NoSQL', u'Python']}
{u'date': datetime.datetime(...), u'text': u'Sharding in MongoDB', u'_id': ObjectId('...'), u'tags': [u'MongoDB', u'Scalability'], u'author': u'Monique'}
>>> 

(3) Since the "tags" field is an array, you can use the $push update modifier, which is more efficient.
  • $push appends a value to the field if the field is an existing array; otherwise it sets the field to the array [value] if the field is not present.
>>> db.blog.update({"author":"Monique"}, { "$push": {"tags":"Python"}});
>>> for post in db.blog.find():
...     post
... 
{u'date': datetime.datetime(2011, 11, 7, 22, 10, 43, 77000), u'text': u'Blogging about MongoDB', u'_id': ObjectId('4eb857b3a9e158609c000004'), u'author': u'John', u'tags': [u'MongoDB', u'NoSQL', u'Python']}
{u'date': datetime.datetime(2011, 11, 8, 1, 5, 32, 604000), u'text': u'Sharding in MongoDB', u'_id': ObjectId('4eb88081a9e158609c000005'), u'tags': [u'MongoDB', u'Scalability', u'Python'], u'author': u'Monique'}
>>> 


Deleting documents from collections
To delete a document from a collection, use the remove method, passing as a parameter a document field that either (a) uniquely identifies the document you want to delete or (b) identifies the set of documents you want to delete.
>>> db.blog.remove({"author":"Monique"})
>>> for post in db.blog.find():
...     post
... 
{u'date': datetime.datetime(2011, 11, 7, 22, 10, 43, 77000), u'text': u'Blogging about MongoDB', u'_id': ObjectId('4eb857b3a9e158609c000004'), u'author': u'John', u'tags': [u'MongoDB', u'NoSQL', u'Python']}
>>>
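
Note: calling remove() with no criteria deletes all documents in the collection (the collection itself and its indexes remain). A minimal sketch; use with care:

>>> db.blog.remove()        --- removes ALL documents from the collection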


Installing MongoDB on Ubuntu


Recently I have been trying out MongoDB and other NoSQL databases for a project, so I decided to compile some of my notes into posts here.

MongoDB: Download, Install and Configuration

Installing MongoDB on Ubuntu
To install MongoDB on Ubuntu, you can use the packages made available by 10gen, following the steps below:

(1) Add the following line to your /etc/apt/sources.list:
deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen

(2) Add the 10gen GPG key, or apt will disable the repository (apt uses encryption keys to verify if the repository is trusted and disables untrusted ones).
jdoe@quark:/etc/apt$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv 7F0CEB10
gpg: requesting key 7F0CEB10 from hkp server keyserver.ubuntu.com
gpg: key 7F0CEB10: public key "Richard Kreuter " imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
jdoe@quark:/etc/apt$ 

(3) To install the package, update the sources and then install:
$ sudo apt-get update
$ sudo apt-get install mongodb-10gen

(4) Create directories for datafiles and database logs.
By default, MongoDB tries to store its datafiles in /data/db.
If the directory does not exist, the server will fail to start unless you explicitly assign a different, existing location for the datafiles.
For security reasons, make sure the directory is created as a non-root user.
(a) You can create the default directory: 
$ sudo mkdir -p  /data/db/
$ sudo chown `id -u` /data/db

(b) Or you can store the datafiles somewhere else. In that case, make sure to specify the datafile location with the --dbpath option when starting the MongoDB server.
$ sudo mkdir -p  
$ sudo chown `id -u` 

(5) You can test the installation by starting the mongodb shell:
jdoe@quark:~$ mongo
MongoDB shell version: 2.0.1
connecting to: test
> 

The installation creates the mongodb user and installs files according to the default configuration below:
Installed architecture
  • binaries installed on /usr/bin
jdoe@quark:/usr/bin$ ls -l mongo*
-rwx... mongo            -- database shell
-rwx... mongod           -- mongodb daemon. This is the core database process
-rwx... mongodump        -- hot backups; creates a binary representation of the entire database, of collections, or of collection objects
-rwx... mongoexport      -- exports a collection to JSON or CSV
-rwx... mongofiles       -- tool for using GridFS, a mechanism for manipulating large files in MongoDB
-rwx... mongoimport      -- imports a JSON/CSV/TSV file into MongoDB
-rwx... mongorestore     -- restores the output of mongodump
-rwx... mongos           -- sharding controller. Provides automatic load balancing and partitioning
-rwx... mongostat        -- shows usage statistics (counts and percentages) for a running mongodb instance 
-rwx... mongotop         -- provide read/write statistics on collections and namespaces in a mongodb instance
  • Configuration file installed on /etc/mongodb.conf
  • database files will be created in: dbpath=/var/lib/mongodb
  • log files will be created in : logpath=/var/log/mongodb/mongodb.log



Starting up and Stopping MongoDB
  • mongod is MongoDB's core database process. It can be started manually to run in the foreground or as a daemon.
  • MongoDB is a database server: it runs in the foreground or background and waits for connections from users.
  • There are a number of options with which mongod can be initialized.
  • The startup options fall into general, replication, master/slave, replica set, and sharding categories. Some of them are:
--port      num   TCP port which mongodb will use
--maxConns  num   max # of simultaneous connections
--logpath   path  log file path
--logappend       append instead of overwrite log file
--fork            fork server process (daemon)
--auth            authenticate users
--dbpath    path  directory for datafiles
--directoryperdb  each database will be stored in its own directory
--shutdown        shuts down the server

(a) Start mongodb running in the foreground in a terminal, storing data in /mongodb/data and using the default port 27017.
(You need to create the /mongodb/data directory first.)
jdoe@quark:~$ mkdir -p /mongodb/data
jdoe@quark:~$ mongod --dbpath /mongodb/data  
...
Sun Nov  6 19:05:09 [initandlisten] options: { dbpath: "/mongodb/data" }
Sun Nov  6 19:05:09 [websvr] admin web console waiting for connections on port 28017
Sun Nov  6 19:05:09 [initandlisten] waiting for connections on port 27017
...

jdoe@quark:~$ ps -ef | grep mongo
jdoe   20519 16142  0 19:05 pts/1    00:00:10 mongod --dbpath /mongodb/data
jdoe   20566 20034  0 19:49 pts/2    00:00:00 grep mongo

jdoe@quark:~$ ls -l /mongodb/data
total 4
-rwxr-xr-x 1 jdoe jdoe 6 2011-11-06 19:05 mongod.lock

(b) Start mongodb as a daemon, running on TCP port 20012, with data stored in /mongodb/data and logs in /mongodb/logs.
jdoe@quark:~$ mongod --fork --port 20012 --dbpath /mongodb/data/ --logpath /mongodb/logs/mongodblog --logappend 
forked process: 20655
jdoe@quark:~$ all output going to: /mongodb/logs/mongodblog

Alternatively, you can start and stop MongoDB as a service:
jdoe@quark:~$ sudo start mongodb
mongodb start/running, process 2824

jdoe@quark:~$ sudo stop mongodb
mongodb stop/waiting

Stopping MongoDB

  1. Control-C will do it if the server is running in the foreground. Mongo waits until all ongoing operations complete and then exits.
  2. Alternatively:
(a) call mongod with the --shutdown option
jdoe@quark:~$ mongod --dbpath /mongodb/data --shutdown
killing process with pid: 20746
or
(b) use database shell (mongo)
(Note the confusing output of the db.shutdownServer() call here: although the messages suggest failure, the database is shut down as expected.)
jdoe@quark:~$ mongo quark:20012
MongoDB shell version: 2.0.1
connecting to: quark:20012/test
> use admin
switched to db admin
> db.shutdownServer()
Sun Nov  6 20:23:59 DBClientCursor::init call() failed
Sun Nov  6 20:23:59 query failed : admin.$cmd { shutdown: 1.0 } to: quark:20012
server should be down...