An important tool in information analysis is dimensionality reduction. Various approaches that simplify large data sets by reducing their dimensionality play an important role in recognition and classification tasks. The efficiency of such dimensionality-reduction tools is measured in terms of memory and computational complexity, which are usually functions of the number of given data points. Sparse local operators with substantially subquadratic complexity at one end, and faithful multi-scale models with quadratic cost at the other, make the design of dimensionality-reduction tools a delicate balance between modeling accuracy and efficiency. Here, we combine the benefits of both and propose a low-dimensional multi-scale modeling of the data at a modest computational cost. The idea is to project the classical multi-dimensional scaling problem onto the data's natural spectral domain, extracted from its Laplace-Beltrami operator. There, a multi-scale embedding into a low-dimensional Euclidean space is accomplished while optimizing over a small number of coefficients. We provide theoretical support and demonstrate that, by working in the natural eigenspace of the data, one can reduce the computational complexity of the process while maintaining model fidelity. As examples, we apply the proposed approach to efficiently extract the dimensions and structure of data points given in a high-dimensional space by embedding them in a low-dimensional Euclidean space. We also efficiently canonize non-rigid shapes by embedding their intrinsic metric into R^3, a method often used for matching and classifying almost isometric articulated objects. Finally, we demonstrate the power of the method by exposing the styles in which handwritten digits appear in a large collection of images, and by visualizing the clustering of digits, again treating each image as a feature point that we map to a plane.
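To make the core idea concrete, the following Python sketch projects classical multi-dimensional scaling onto the leading eigenvectors of a graph Laplacian, a standard discrete stand-in for the Laplace-Beltrami operator. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name spectral_mds and all parameter choices are hypothetical, and for clarity the sketch forms the full pairwise-distance matrix, whereas the complexity savings the abstract refers to come from optimizing only over the small coefficient matrix in the spectral domain (and, in practice, from evaluating distances only on a subsample).

```python
# A minimal sketch of projecting classical MDS onto the data's spectral
# domain; assumes a k-nearest-neighbor graph Laplacian approximates the
# Laplace-Beltrami operator. Illustrative only, not the paper's code.
import numpy as np
from scipy.sparse import csgraph
from scipy.spatial.distance import pdist, squareform
from sklearn.neighbors import kneighbors_graph

def spectral_mds(points, n_eigs=30, embed_dim=2, n_neighbors=8):
    n = points.shape[0]
    # Build a graph Laplacian as a discrete proxy for Laplace-Beltrami.
    W = kneighbors_graph(points, n_neighbors, mode='connectivity')
    W = 0.5 * (W + W.T)                        # symmetrize the graph
    L = csgraph.laplacian(W, normed=True).toarray()
    # The first n_eigs eigenvectors span the data's natural spectral domain.
    _, Phi = np.linalg.eigh(L)                 # eigenvalues ascending
    Phi = Phi[:, :n_eigs]
    # Double-centered squared-distance matrix of classical MDS.
    # (Formed in full here for clarity; a subsample is used in practice.)
    D2 = squareform(pdist(points)) ** 2
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J
    # Project the MDS problem onto the spectral basis: the unknowns shrink
    # from an n x n problem to an n_eigs x n_eigs coefficient matrix.
    C = Phi.T @ B @ Phi
    vals, vecs = np.linalg.eigh(C)
    top = np.argsort(vals)[::-1][:embed_dim]   # dominant eigenpairs
    alpha = vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
    return Phi @ alpha                         # n x embed_dim embedding

# Example usage on synthetic data:
# emb = spectral_mds(np.random.rand(500, 10))  # 500 points, 10-D -> 2-D
```

The key design point the sketch tries to convey is that the optimization variables are the entries of the small matrix alpha of spectral coefficients, rather than the full set of embedding coordinates, which is what keeps the multi-scale model affordable.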