What were you trying to solve when you created MongoDB?

We were and are trying to build the database that we always wanted as developers. For pure reporting, SQL and relational databases are nice, but when building applications we always wanted something different: something that made coding easier and that scaled horizontally.

What was a major hurdle in the early days of MongoDB?

The big hurdle for the whole NoSQL space was that moving to anything from relational is a big step for the user. Relational is a great, well-understood technology, and everyone who graduates from school already knows it. However, computer architectures are changing, and cloud computing is coming if not already here. We need solutions that run in fundamentally different environments. Flexible data models are also interesting to us; thus the dynamic schema nature of the product.

Where is MongoDB going in the next 3 months? 6 months? 12 months?

We certainly believe there is a lot still to do: enough for years, not just months. High on the roadmap are faster aggregation capabilities, full-text search, better concurrency, and easy large-cluster setup and administration. A general focus right now is making sure the product is suitable for mission-critical production applications.

Is there anything you wish you had done differently with MongoDB?

I’m quite happy in hindsight with a lot of the design decisions made two or three years ago. We have been fortunate there. I like the data model a lot. I like that strongly consistent operations are possible: there are many use cases, such as registering a new user, where one needs that. So it’s less about regrets and more that there is a long, long list of things we want to do that we haven’t done yet.

What makes MongoDB stand out?

Document-oriented

High performance

High availability

Easy scalability

Rich query language (see the example below)
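For instance, here is a quick mongo shell sketch of the document model and query language (the users collection and its fields are hypothetical, not from the interview):

// insert a document; no schema declaration is needed
db.users.insert({name: "alice", age: 30, tags: ["admin", "beta"]});

// rich queries: match array membership and numeric ranges together
db.users.find({tags: "admin", age: {$gt: 21}});

// add a secondary index to keep that query fast
db.users.ensureIndex({age: 1});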

If I am using replication, can some members use journaling and others not?

Yes

Can I use the journaling feature to perform safe hot backups?

Yes

What are the 32-bit nuances?

There is extra memory-mapped file activity with journaling. This will further constrain the already limited database size of 32-bit builds. Thus, for now journaling is disabled by default on 32-bit systems (it can still be enabled explicitly with the --journal command-line option).

Will the journal replay have problems if entries are incomplete (like the failure happened in the middle of one)?

Each journal (group) write is consistent and won’t be replayed during recovery unless it is complete.

What is the role of the profiler in MongoDB?

MongoDB includes a database profiler which shows performance characteristics of each operation against the database. Using the profiler you can find queries (and write operations) which are slower than they should be; use this information, for example, to determine when an index is needed.
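A minimal sketch of turning it on from the mongo shell (the 100 ms threshold is just an example value):

// profile operations slower than 100 ms (level 1 = slow operations only)
db.setProfilingLevel(1, 100);

// profiled operations are written to the system.profile collection
db.system.profile.find().sort({$natural: -1}).limit(5);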

What’s a “namespace”?

MongoDB stores BSON objects in collections. The concatenation of the database name and the collection name (with a period in between) is called a namespace; for example, the users collection in the test database has the namespace test.users.

If you remove an object attribute is it deleted from the store?

Yes, you remove the attribute and then re-save() the object.
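For example, in the mongo shell (the users collection and age attribute are hypothetical):

var doc = db.users.findOne({name: "alice"});
delete doc.age;      // remove the attribute from the in-memory object
db.users.save(doc);  // re-save; age is now gone from the store

// alternatively, skip the round trip and use the $unset operator
db.users.update({name: "alice"}, {$unset: {age: 1}});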

Are null values allowed?

For members of an object, yes. You cannot insert null itself into a collection, however, as null isn’t an object; you can insert the empty object {}.

Does an update fsync to disk immediately?

No, writes to disk are lazy by default. A write may hit disk a couple of seconds later. For example, if the database receives a thousand increments to an object within one second, it will only be flushed to disk once. (Note that fsync options are available, both at the command line and via getLastError.)
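For example, to force a flush for a particular write from the shell (the events collection is hypothetical):

db.events.insert({type: "signup"});
// block until this connection's last write has been fsync'd to disk
db.runCommand({getlasterror: 1, fsync: true});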

How do I do transactions/locking?

MongoDB does not use traditional locking or complex transactions with rollback, as it is designed to be lightweight, fast, and predictable in its performance. It can be thought of as analogous to the MySQL MyISAM autocommit model. By keeping transaction support extremely simple, performance is enhanced, especially in a system that may run across many servers.

Why are my data files so large?

MongoDB does aggressive preallocation of reserved space to avoid file system fragmentation.

How long does replica set failover take?

It may take 10-30 seconds for the primary to be declared down by the other members and a new primary elected. During this window of time, the cluster is down for “primary” operations, that is, writes and strongly consistent reads. However, you may execute eventually consistent queries against secondaries at any time (in slaveOk mode), including during this window.
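For example, from the mongo shell (the users collection is hypothetical):

// allow this connection to read from secondaries (slaveOk mode)
db.getMongo().setSlaveOk();

// an eventually consistent read; works even during a failover window
db.users.find({name: "alice"});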

What’s a master or primary?

This is a node/member which is currently the primary and processes all writes for the replica set. In a replica set, on a failover event, a different member can become primary.

What’s a secondary or slave?

A secondary is a node/member which applies operations from the current primary. This is done by tailing the replication oplog (local.oplog.rs).

Replication from primary to secondary is asynchronous, however the secondary will try to stay as close to current as possible (often this is just a few milliseconds on a LAN).
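You can peek at the oplog from the shell; for example, to see the most recently applied operation:

// the oplog lives in the "local" database
var local = db.getSiblingDB("local");

// newest entry; "ts" is the replication timestamp
local.oplog.rs.find().sort({$natural: -1}).limit(1);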

Do I have to call getLastError to make a write durable?

No. If you don’t call getLastError (aka “Safe Mode”) the server behaves exactly as if you had. The getLastError call simply lets one get confirmation that the write operation was successfully committed. Of course, often you will want that confirmation, but the safety of the write and its durability are independent of it.
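For example, to ask for confirmation that a write has replicated to at least two members (the collection name and the w/wtimeout values are illustrative):

db.orders.insert({sku: "abc123", qty: 1});
// wait until at least 2 members have the write, or error after 5 seconds
db.runCommand({getlasterror: 1, w: 2, wtimeout: 5000});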

Should I start out with sharded or with a non-sharded MongoDB environment?

We suggest starting unsharded for simplicity and quick startup, unless your initial data set will not fit on a single server. Upgrading from unsharded to sharded is easy and seamless, so there is not much advantage to setting up sharding before your data set is large.
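As a sketch of that upgrade path, run against a mongos (the database name, collection, and shard key here are hypothetical):

// enable sharding for the database, then shard one collection on a key
db.adminCommand({enablesharding: "mydb"});
db.adminCommand({shardcollection: "mydb.users", key: {username: 1}});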

How does sharding work with replication?

Each shard is a logical collection of partitioned data. The shard could consist of a single server or a cluster of replicas. We recommend using a replica set for each shard.

When will data be on more than one shard?

MongoDB sharding is range based, so all the objects in a collection are placed into chunks. Only when there is more than one chunk is there an option for multiple shards to hold data. Right now, the default chunk size is 64 MB, so you need at least 64 MB of data for a migration to occur.
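You can see how a collection's chunks are laid out by querying the config database (this assumes a collection mydb.users has already been sharded):

var cfg = db.getSiblingDB("config");

// one document per chunk: which shard owns it and its shard-key range
cfg.chunks.find({ns: "mydb.users"}, {shard: 1, min: 1, max: 1});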

What happens if I try to update a document on a chunk that is being migrated?

The update will go through immediately on the old shard, and then the change will be replicated to the new shard before ownership transfers.

What if a shard is down or slow and I do a query?

If a shard is down, the query will return an error unless the “Partial” query option is set. If a shard is responding slowly, mongos will wait for it.
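In the mongo shell, that option can be set on a cursor like this (sketch; the users collection is hypothetical):

// return results from the reachable shards instead of erroring
db.users.find().addOption(DBQuery.Option.partial);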

Can I remove old files in the moveChunk directory?

Yes, these files are created as backups during normal shard-balancing operations. Once the operations are done, they can be deleted. The cleanup process is currently manual, so please do take care of this to free up space.

How can I see the connections used by mongos?

db._adminCommand("connPoolStats");

If a moveChunk fails do I need to cleanup the partially moved docs?

No, chunk moves are consistent and deterministic; the move will retry and when completed the data will only be on the new shard.